Oct  9 06:13:50 np0005478418 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  9 06:13:50 np0005478418 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 06:13:50 np0005478418 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 06:13:50 np0005478418 kernel: BIOS-provided physical RAM map:
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 06:13:50 np0005478418 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  9 06:13:50 np0005478418 kernel: NX (Execute Disable) protection: active
Oct  9 06:13:50 np0005478418 kernel: APIC: Static calls initialized
Oct  9 06:13:50 np0005478418 kernel: SMBIOS 2.8 present.
Oct  9 06:13:50 np0005478418 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  9 06:13:50 np0005478418 kernel: Hypervisor detected: KVM
Oct  9 06:13:50 np0005478418 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 06:13:50 np0005478418 kernel: kvm-clock: using sched offset of 4624171973 cycles
Oct  9 06:13:50 np0005478418 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 06:13:50 np0005478418 kernel: tsc: Detected 2800.000 MHz processor
Oct  9 06:13:50 np0005478418 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  9 06:13:50 np0005478418 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 06:13:50 np0005478418 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 06:13:50 np0005478418 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  9 06:13:50 np0005478418 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  9 06:13:50 np0005478418 kernel: Using GB pages for direct mapping
Oct  9 06:13:50 np0005478418 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 06:13:50 np0005478418 kernel: ACPI: Early table checksum verification disabled
Oct  9 06:13:50 np0005478418 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  9 06:13:50 np0005478418 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 06:13:50 np0005478418 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 06:13:50 np0005478418 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 06:13:50 np0005478418 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  9 06:13:50 np0005478418 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 06:13:50 np0005478418 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  9 06:13:50 np0005478418 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  9 06:13:50 np0005478418 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  9 06:13:50 np0005478418 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  9 06:13:50 np0005478418 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  9 06:13:50 np0005478418 kernel: No NUMA configuration found
Oct  9 06:13:50 np0005478418 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  9 06:13:50 np0005478418 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct  9 06:13:50 np0005478418 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  9 06:13:50 np0005478418 kernel: Zone ranges:
Oct  9 06:13:50 np0005478418 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 06:13:50 np0005478418 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 06:13:50 np0005478418 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  9 06:13:50 np0005478418 kernel:  Device   empty
Oct  9 06:13:50 np0005478418 kernel: Movable zone start for each node
Oct  9 06:13:50 np0005478418 kernel: Early memory node ranges
Oct  9 06:13:50 np0005478418 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 06:13:50 np0005478418 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  9 06:13:50 np0005478418 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  9 06:13:50 np0005478418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  9 06:13:50 np0005478418 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 06:13:50 np0005478418 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 06:13:50 np0005478418 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 06:13:50 np0005478418 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 06:13:50 np0005478418 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 06:13:50 np0005478418 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 06:13:50 np0005478418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 06:13:50 np0005478418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 06:13:50 np0005478418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 06:13:50 np0005478418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 06:13:50 np0005478418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 06:13:50 np0005478418 kernel: TSC deadline timer available
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Max. logical packages:   8
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Max. logical dies:       8
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Max. dies per package:   1
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Max. threads per core:   1
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Num. cores per package:     1
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Num. threads per package:   1
Oct  9 06:13:50 np0005478418 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  9 06:13:50 np0005478418 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 06:13:50 np0005478418 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 06:13:50 np0005478418 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  9 06:13:50 np0005478418 kernel: Booting paravirtualized kernel on KVM
Oct  9 06:13:50 np0005478418 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 06:13:50 np0005478418 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  9 06:13:50 np0005478418 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  9 06:13:50 np0005478418 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  9 06:13:50 np0005478418 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 06:13:50 np0005478418 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 06:13:50 np0005478418 kernel: random: crng init done
Oct  9 06:13:50 np0005478418 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: Fallback order for Node 0: 0 
Oct  9 06:13:50 np0005478418 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 06:13:50 np0005478418 kernel: Policy zone: Normal
Oct  9 06:13:50 np0005478418 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 06:13:50 np0005478418 kernel: software IO TLB: area num 8.
Oct  9 06:13:50 np0005478418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  9 06:13:50 np0005478418 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 06:13:50 np0005478418 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 06:13:50 np0005478418 kernel: Dynamic Preempt: voluntary
Oct  9 06:13:50 np0005478418 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 06:13:50 np0005478418 kernel: rcu: 	RCU event tracing is enabled.
Oct  9 06:13:50 np0005478418 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  9 06:13:50 np0005478418 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  9 06:13:50 np0005478418 kernel: 	Rude variant of Tasks RCU enabled.
Oct  9 06:13:50 np0005478418 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  9 06:13:50 np0005478418 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 06:13:50 np0005478418 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  9 06:13:50 np0005478418 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 06:13:50 np0005478418 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 06:13:50 np0005478418 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 06:13:50 np0005478418 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  9 06:13:50 np0005478418 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 06:13:50 np0005478418 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 06:13:50 np0005478418 kernel: Console: colour VGA+ 80x25
Oct  9 06:13:50 np0005478418 kernel: printk: console [ttyS0] enabled
Oct  9 06:13:50 np0005478418 kernel: ACPI: Core revision 20230331
Oct  9 06:13:50 np0005478418 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 06:13:50 np0005478418 kernel: x2apic enabled
Oct  9 06:13:50 np0005478418 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 06:13:50 np0005478418 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 06:13:50 np0005478418 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  9 06:13:50 np0005478418 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 06:13:50 np0005478418 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 06:13:50 np0005478418 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 06:13:50 np0005478418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 06:13:50 np0005478418 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 06:13:50 np0005478418 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 06:13:50 np0005478418 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  9 06:13:50 np0005478418 kernel: RETBleed: Mitigation: untrained return thunk
Oct  9 06:13:50 np0005478418 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 06:13:50 np0005478418 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 06:13:50 np0005478418 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 06:13:50 np0005478418 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 06:13:50 np0005478418 kernel: x86/bugs: return thunk changed
Oct  9 06:13:50 np0005478418 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 06:13:50 np0005478418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 06:13:50 np0005478418 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 06:13:50 np0005478418 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 06:13:50 np0005478418 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 06:13:50 np0005478418 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  9 06:13:50 np0005478418 kernel: Freeing SMP alternatives memory: 40K
Oct  9 06:13:50 np0005478418 kernel: pid_max: default: 32768 minimum: 301
Oct  9 06:13:50 np0005478418 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 06:13:50 np0005478418 kernel: landlock: Up and running.
Oct  9 06:13:50 np0005478418 kernel: Yama: becoming mindful.
Oct  9 06:13:50 np0005478418 kernel: SELinux:  Initializing.
Oct  9 06:13:50 np0005478418 kernel: LSM support for eBPF active
Oct  9 06:13:50 np0005478418 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  9 06:13:50 np0005478418 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 06:13:50 np0005478418 kernel: ... version:                0
Oct  9 06:13:50 np0005478418 kernel: ... bit width:              48
Oct  9 06:13:50 np0005478418 kernel: ... generic registers:      6
Oct  9 06:13:50 np0005478418 kernel: ... value mask:             0000ffffffffffff
Oct  9 06:13:50 np0005478418 kernel: ... max period:             00007fffffffffff
Oct  9 06:13:50 np0005478418 kernel: ... fixed-purpose events:   0
Oct  9 06:13:50 np0005478418 kernel: ... event mask:             000000000000003f
Oct  9 06:13:50 np0005478418 kernel: signal: max sigframe size: 1776
Oct  9 06:13:50 np0005478418 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 06:13:50 np0005478418 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  9 06:13:50 np0005478418 kernel: smp: Bringing up secondary CPUs ...
Oct  9 06:13:50 np0005478418 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 06:13:50 np0005478418 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  9 06:13:50 np0005478418 kernel: smp: Brought up 1 node, 8 CPUs
Oct  9 06:13:50 np0005478418 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct  9 06:13:50 np0005478418 kernel: node 0 deferred pages initialised in 22ms
Oct  9 06:13:50 np0005478418 kernel: Memory: 7765576K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616512K reserved, 0K cma-reserved)
Oct  9 06:13:50 np0005478418 kernel: devtmpfs: initialized
Oct  9 06:13:50 np0005478418 kernel: x86/mm: Memory block size: 128MB
Oct  9 06:13:50 np0005478418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 06:13:50 np0005478418 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 06:13:50 np0005478418 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 06:13:50 np0005478418 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 06:13:50 np0005478418 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 06:13:50 np0005478418 kernel: audit: initializing netlink subsys (disabled)
Oct  9 06:13:50 np0005478418 kernel: audit: type=2000 audit(1760004829.155:1): state=initialized audit_enabled=0 res=1
Oct  9 06:13:50 np0005478418 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 06:13:50 np0005478418 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 06:13:50 np0005478418 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 06:13:50 np0005478418 kernel: cpuidle: using governor menu
Oct  9 06:13:50 np0005478418 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 06:13:50 np0005478418 kernel: PCI: Using configuration type 1 for base access
Oct  9 06:13:50 np0005478418 kernel: PCI: Using configuration type 1 for extended access
Oct  9 06:13:50 np0005478418 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 06:13:50 np0005478418 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 06:13:50 np0005478418 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 06:13:50 np0005478418 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 06:13:50 np0005478418 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 06:13:50 np0005478418 kernel: Demotion targets for Node 0: null
Oct  9 06:13:50 np0005478418 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 06:13:50 np0005478418 kernel: ACPI: Added _OSI(Module Device)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 06:13:50 np0005478418 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 06:13:50 np0005478418 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 06:13:50 np0005478418 kernel: ACPI: Interpreter enabled
Oct  9 06:13:50 np0005478418 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  9 06:13:50 np0005478418 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 06:13:50 np0005478418 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 06:13:50 np0005478418 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 06:13:50 np0005478418 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 06:13:50 np0005478418 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [3] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [4] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [5] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [6] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [7] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [8] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [9] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [10] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [11] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [12] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [13] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [14] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [15] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [16] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [17] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [18] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [19] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [20] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [21] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [22] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [23] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [24] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [25] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [26] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [27] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [28] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [29] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [30] registered
Oct  9 06:13:50 np0005478418 kernel: acpiphp: Slot [31] registered
Oct  9 06:13:50 np0005478418 kernel: PCI host bridge to bus 0000:00
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 06:13:50 np0005478418 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  9 06:13:50 np0005478418 kernel: iommu: Default domain type: Translated
Oct  9 06:13:50 np0005478418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 06:13:50 np0005478418 kernel: SCSI subsystem initialized
Oct  9 06:13:50 np0005478418 kernel: ACPI: bus type USB registered
Oct  9 06:13:50 np0005478418 kernel: usbcore: registered new interface driver usbfs
Oct  9 06:13:50 np0005478418 kernel: usbcore: registered new interface driver hub
Oct  9 06:13:50 np0005478418 kernel: usbcore: registered new device driver usb
Oct  9 06:13:50 np0005478418 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 06:13:50 np0005478418 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 06:13:50 np0005478418 kernel: PTP clock support registered
Oct  9 06:13:50 np0005478418 kernel: EDAC MC: Ver: 3.0.0
Oct  9 06:13:50 np0005478418 kernel: NetLabel: Initializing
Oct  9 06:13:50 np0005478418 kernel: NetLabel:  domain hash size = 128
Oct  9 06:13:50 np0005478418 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 06:13:50 np0005478418 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 06:13:50 np0005478418 kernel: PCI: Using ACPI for IRQ routing
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 06:13:50 np0005478418 kernel: vgaarb: loaded
Oct  9 06:13:50 np0005478418 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 06:13:50 np0005478418 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 06:13:50 np0005478418 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 06:13:50 np0005478418 kernel: pnp: PnP ACPI init
Oct  9 06:13:50 np0005478418 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 06:13:50 np0005478418 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_INET protocol family
Oct  9 06:13:50 np0005478418 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 06:13:50 np0005478418 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_XDP protocol family
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  9 06:13:50 np0005478418 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  9 06:13:50 np0005478418 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  9 06:13:50 np0005478418 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 90716 usecs
Oct  9 06:13:50 np0005478418 kernel: PCI: CLS 0 bytes, default 64
Oct  9 06:13:50 np0005478418 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 06:13:50 np0005478418 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  9 06:13:50 np0005478418 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 06:13:50 np0005478418 kernel: ACPI: bus type thunderbolt registered
Oct  9 06:13:50 np0005478418 kernel: Initialise system trusted keyrings
Oct  9 06:13:50 np0005478418 kernel: Key type blacklist registered
Oct  9 06:13:50 np0005478418 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 06:13:50 np0005478418 kernel: zbud: loaded
Oct  9 06:13:50 np0005478418 kernel: integrity: Platform Keyring initialized
Oct  9 06:13:50 np0005478418 kernel: integrity: Machine keyring initialized
Oct  9 06:13:50 np0005478418 kernel: Freeing initrd memory: 86104K
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_ALG protocol family
Oct  9 06:13:50 np0005478418 kernel: xor: automatically using best checksumming function   avx       
Oct  9 06:13:50 np0005478418 kernel: Key type asymmetric registered
Oct  9 06:13:50 np0005478418 kernel: Asymmetric key parser 'x509' registered
Oct  9 06:13:50 np0005478418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 06:13:50 np0005478418 kernel: io scheduler mq-deadline registered
Oct  9 06:13:50 np0005478418 kernel: io scheduler kyber registered
Oct  9 06:13:50 np0005478418 kernel: io scheduler bfq registered
Oct  9 06:13:50 np0005478418 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 06:13:50 np0005478418 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 06:13:50 np0005478418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 06:13:50 np0005478418 kernel: ACPI: button: Power Button [PWRF]
Oct  9 06:13:50 np0005478418 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  9 06:13:50 np0005478418 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  9 06:13:50 np0005478418 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  9 06:13:50 np0005478418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 06:13:50 np0005478418 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 06:13:50 np0005478418 kernel: Non-volatile memory driver v1.3
Oct  9 06:13:50 np0005478418 kernel: rdac: device handler registered
Oct  9 06:13:50 np0005478418 kernel: hp_sw: device handler registered
Oct  9 06:13:50 np0005478418 kernel: emc: device handler registered
Oct  9 06:13:50 np0005478418 kernel: alua: device handler registered
Oct  9 06:13:50 np0005478418 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  9 06:13:50 np0005478418 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  9 06:13:50 np0005478418 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  9 06:13:50 np0005478418 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  9 06:13:50 np0005478418 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 06:13:50 np0005478418 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 06:13:50 np0005478418 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 06:13:50 np0005478418 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 06:13:50 np0005478418 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  9 06:13:50 np0005478418 kernel: hub 1-0:1.0: USB hub found
Oct  9 06:13:50 np0005478418 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 06:13:50 np0005478418 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 06:13:50 np0005478418 kernel: usbserial: USB Serial support registered for generic
Oct  9 06:13:50 np0005478418 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 06:13:50 np0005478418 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 06:13:50 np0005478418 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 06:13:50 np0005478418 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 06:13:50 np0005478418 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  9 06:13:50 np0005478418 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 06:13:50 np0005478418 kernel: rtc_cmos 00:04: registered as rtc0
Oct  9 06:13:50 np0005478418 kernel: rtc_cmos 00:04: setting system clock to 2025-10-09T10:13:49 UTC (1760004829)
Oct  9 06:13:50 np0005478418 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  9 06:13:50 np0005478418 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 06:13:50 np0005478418 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 06:13:50 np0005478418 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 06:13:50 np0005478418 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 06:13:50 np0005478418 kernel: usbcore: registered new interface driver usbhid
Oct  9 06:13:50 np0005478418 kernel: usbhid: USB HID core driver
Oct  9 06:13:50 np0005478418 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 06:13:50 np0005478418 kernel: Initializing XFRM netlink socket
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_INET6 protocol family
Oct  9 06:13:50 np0005478418 kernel: Segment Routing with IPv6
Oct  9 06:13:50 np0005478418 kernel: NET: Registered PF_PACKET protocol family
Oct  9 06:13:50 np0005478418 kernel: mpls_gso: MPLS GSO support
Oct  9 06:13:50 np0005478418 kernel: IPI shorthand broadcast: enabled
Oct  9 06:13:50 np0005478418 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 06:13:50 np0005478418 kernel: AES CTR mode by8 optimization enabled
Oct  9 06:13:50 np0005478418 kernel: sched_clock: Marking stable (1208003338, 157344064)->(1502404873, -137057471)
Oct  9 06:13:50 np0005478418 kernel: registered taskstats version 1
Oct  9 06:13:50 np0005478418 kernel: Loading compiled-in X.509 certificates
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 06:13:50 np0005478418 kernel: Demotion targets for Node 0: null
Oct  9 06:13:50 np0005478418 kernel: page_owner is disabled
Oct  9 06:13:50 np0005478418 kernel: Key type .fscrypt registered
Oct  9 06:13:50 np0005478418 kernel: Key type fscrypt-provisioning registered
Oct  9 06:13:50 np0005478418 kernel: Key type big_key registered
Oct  9 06:13:50 np0005478418 kernel: Key type encrypted registered
Oct  9 06:13:50 np0005478418 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 06:13:50 np0005478418 kernel: Loading compiled-in module X.509 certificates
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 06:13:50 np0005478418 kernel: ima: Allocated hash algorithm: sha256
Oct  9 06:13:50 np0005478418 kernel: ima: No architecture policies found
Oct  9 06:13:50 np0005478418 kernel: evm: Initialising EVM extended attributes:
Oct  9 06:13:50 np0005478418 kernel: evm: security.selinux
Oct  9 06:13:50 np0005478418 kernel: evm: security.SMACK64 (disabled)
Oct  9 06:13:50 np0005478418 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 06:13:50 np0005478418 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 06:13:50 np0005478418 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 06:13:50 np0005478418 kernel: evm: security.apparmor (disabled)
Oct  9 06:13:50 np0005478418 kernel: evm: security.ima
Oct  9 06:13:50 np0005478418 kernel: evm: security.capability
Oct  9 06:13:50 np0005478418 kernel: evm: HMAC attrs: 0x1
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 06:13:50 np0005478418 kernel: Running certificate verification RSA selftest
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 06:13:50 np0005478418 kernel: Running certificate verification ECDSA selftest
Oct  9 06:13:50 np0005478418 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 06:13:50 np0005478418 kernel: clk: Disabling unused clocks
Oct  9 06:13:50 np0005478418 kernel: Freeing unused decrypted memory: 2028K
Oct  9 06:13:50 np0005478418 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 06:13:50 np0005478418 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 06:13:50 np0005478418 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 06:13:50 np0005478418 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  9 06:13:50 np0005478418 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 06:13:50 np0005478418 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  9 06:13:50 np0005478418 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 06:13:50 np0005478418 kernel: Run /init as init process
Oct  9 06:13:50 np0005478418 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 06:13:50 np0005478418 systemd: Detected virtualization kvm.
Oct  9 06:13:50 np0005478418 systemd: Detected architecture x86-64.
Oct  9 06:13:50 np0005478418 systemd: Running in initrd.
Oct  9 06:13:50 np0005478418 systemd: No hostname configured, using default hostname.
Oct  9 06:13:50 np0005478418 systemd: Hostname set to <localhost>.
Oct  9 06:13:50 np0005478418 systemd: Initializing machine ID from VM UUID.
Oct  9 06:13:50 np0005478418 systemd: Queued start job for default target Initrd Default Target.
Oct  9 06:13:50 np0005478418 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 06:13:50 np0005478418 systemd: Reached target Local Encrypted Volumes.
Oct  9 06:13:50 np0005478418 systemd: Reached target Initrd /usr File System.
Oct  9 06:13:50 np0005478418 systemd: Reached target Local File Systems.
Oct  9 06:13:50 np0005478418 systemd: Reached target Path Units.
Oct  9 06:13:50 np0005478418 systemd: Reached target Slice Units.
Oct  9 06:13:50 np0005478418 systemd: Reached target Swaps.
Oct  9 06:13:50 np0005478418 systemd: Reached target Timer Units.
Oct  9 06:13:50 np0005478418 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 06:13:50 np0005478418 systemd: Listening on Journal Socket (/dev/log).
Oct  9 06:13:50 np0005478418 systemd: Listening on Journal Socket.
Oct  9 06:13:50 np0005478418 systemd: Listening on udev Control Socket.
Oct  9 06:13:50 np0005478418 systemd: Listening on udev Kernel Socket.
Oct  9 06:13:50 np0005478418 systemd: Reached target Socket Units.
Oct  9 06:13:50 np0005478418 systemd: Starting Create List of Static Device Nodes...
Oct  9 06:13:50 np0005478418 systemd: Starting Journal Service...
Oct  9 06:13:50 np0005478418 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 06:13:50 np0005478418 systemd: Starting Apply Kernel Variables...
Oct  9 06:13:50 np0005478418 systemd: Starting Create System Users...
Oct  9 06:13:50 np0005478418 systemd: Starting Setup Virtual Console...
Oct  9 06:13:50 np0005478418 systemd: Finished Create List of Static Device Nodes.
Oct  9 06:13:50 np0005478418 systemd: Finished Apply Kernel Variables.
Oct  9 06:13:50 np0005478418 systemd: Finished Create System Users.
Oct  9 06:13:50 np0005478418 systemd-journald[305]: Journal started
Oct  9 06:13:50 np0005478418 systemd-journald[305]: Runtime Journal (/run/log/journal/8e0946a45a4143b5afdedf78fe78e002) is 8.0M, max 153.5M, 145.5M free.
Oct  9 06:13:50 np0005478418 systemd-sysusers[309]: Creating group 'users' with GID 100.
Oct  9 06:13:50 np0005478418 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Oct  9 06:13:50 np0005478418 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 06:13:50 np0005478418 systemd: Started Journal Service.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 06:13:50 np0005478418 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 06:13:50 np0005478418 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 06:13:50 np0005478418 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 06:13:50 np0005478418 systemd[1]: Finished Setup Virtual Console.
Oct  9 06:13:50 np0005478418 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting dracut cmdline hook...
Oct  9 06:13:50 np0005478418 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 06:13:50 np0005478418 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 06:13:50 np0005478418 systemd[1]: Finished dracut cmdline hook.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting dracut pre-udev hook...
Oct  9 06:13:50 np0005478418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 06:13:50 np0005478418 kernel: device-mapper: uevent: version 1.0.3
Oct  9 06:13:50 np0005478418 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 06:13:50 np0005478418 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 06:13:50 np0005478418 kernel: RPC: Registered udp transport module.
Oct  9 06:13:50 np0005478418 kernel: RPC: Registered tcp transport module.
Oct  9 06:13:50 np0005478418 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 06:13:50 np0005478418 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 06:13:50 np0005478418 rpc.statd[442]: Version 2.5.4 starting
Oct  9 06:13:50 np0005478418 rpc.statd[442]: Initializing NSM state
Oct  9 06:13:50 np0005478418 rpc.idmapd[447]: Setting log level to 0
Oct  9 06:13:50 np0005478418 systemd[1]: Finished dracut pre-udev hook.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 06:13:50 np0005478418 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 06:13:50 np0005478418 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 06:13:50 np0005478418 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 06:13:50 np0005478418 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 06:13:50 np0005478418 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 06:13:50 np0005478418 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 06:13:50 np0005478418 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 06:13:50 np0005478418 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 06:13:50 np0005478418 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 06:13:50 np0005478418 systemd[1]: Reached target Network.
Oct  9 06:13:50 np0005478418 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 06:13:50 np0005478418 systemd[1]: Starting dracut initqueue hook...
Oct  9 06:13:50 np0005478418 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  9 06:13:51 np0005478418 kernel: scsi host0: ata_piix
Oct  9 06:13:51 np0005478418 kernel: scsi host1: ata_piix
Oct  9 06:13:51 np0005478418 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  9 06:13:51 np0005478418 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  9 06:13:51 np0005478418 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 06:13:51 np0005478418 kernel: vda: vda1
Oct  9 06:13:51 np0005478418 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 06:13:51 np0005478418 kernel: ata1: found unknown device (class 0)
Oct  9 06:13:51 np0005478418 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 06:13:51 np0005478418 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 06:13:51 np0005478418 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target System Initialization.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Basic System.
Oct  9 06:13:51 np0005478418 systemd-udevd[481]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:13:51 np0005478418 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 06:13:51 np0005478418 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 06:13:51 np0005478418 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 06:13:51 np0005478418 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Initrd Root Device.
Oct  9 06:13:51 np0005478418 systemd[1]: Finished dracut initqueue hook.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Remote File Systems.
Oct  9 06:13:51 np0005478418 systemd[1]: Starting dracut pre-mount hook...
Oct  9 06:13:51 np0005478418 systemd[1]: Finished dracut pre-mount hook.
Oct  9 06:13:51 np0005478418 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 06:13:51 np0005478418 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 06:13:51 np0005478418 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 06:13:51 np0005478418 systemd[1]: Mounting /sysroot...
Oct  9 06:13:51 np0005478418 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 06:13:51 np0005478418 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 06:13:51 np0005478418 kernel: XFS (vda1): Ending clean mount
Oct  9 06:13:51 np0005478418 systemd[1]: Mounted /sysroot.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Initrd Root File System.
Oct  9 06:13:51 np0005478418 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 06:13:51 np0005478418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 06:13:51 np0005478418 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Initrd File Systems.
Oct  9 06:13:51 np0005478418 systemd[1]: Reached target Initrd Default Target.
Oct  9 06:13:51 np0005478418 systemd[1]: Starting dracut mount hook...
Oct  9 06:13:51 np0005478418 systemd[1]: Finished dracut mount hook.
Oct  9 06:13:51 np0005478418 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 06:13:52 np0005478418 rpc.idmapd[447]: exiting on signal 15
Oct  9 06:13:52 np0005478418 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 06:13:52 np0005478418 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Network.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Timer Units.
Oct  9 06:13:52 np0005478418 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Initrd Default Target.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Basic System.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Initrd Root Device.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Path Units.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Remote File Systems.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Slice Units.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Socket Units.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target System Initialization.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Local File Systems.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Swaps.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut mount hook.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut initqueue hook.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Setup Virtual Console.
Oct  9 06:13:52 np0005478418 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Closed udev Control Socket.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Closed udev Kernel Socket.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 06:13:52 np0005478418 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped dracut cmdline hook.
Oct  9 06:13:52 np0005478418 systemd[1]: Starting Cleanup udev Database...
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 06:13:52 np0005478418 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 06:13:52 np0005478418 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Stopped Create System Users.
Oct  9 06:13:52 np0005478418 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 06:13:52 np0005478418 systemd[1]: Finished Cleanup udev Database.
Oct  9 06:13:52 np0005478418 systemd[1]: Reached target Switch Root.
Oct  9 06:13:52 np0005478418 systemd[1]: Starting Switch Root...
Oct  9 06:13:52 np0005478418 systemd[1]: Switching root.
Oct  9 06:13:52 np0005478418 systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Oct  9 06:13:52 np0005478418 systemd-journald[305]: Journal stopped
Oct  9 06:13:53 np0005478418 kernel: audit: type=1404 audit(1760004832.321:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:13:53 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:13:53 np0005478418 kernel: audit: type=1403 audit(1760004832.458:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 06:13:53 np0005478418 systemd: Successfully loaded SELinux policy in 141.603ms.
Oct  9 06:13:53 np0005478418 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.236ms.
Oct  9 06:13:53 np0005478418 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 06:13:53 np0005478418 systemd: Detected virtualization kvm.
Oct  9 06:13:53 np0005478418 systemd: Detected architecture x86-64.
Oct  9 06:13:53 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:13:53 np0005478418 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd: Stopped Switch Root.
Oct  9 06:13:53 np0005478418 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 06:13:53 np0005478418 systemd: Created slice Slice /system/getty.
Oct  9 06:13:53 np0005478418 systemd: Created slice Slice /system/serial-getty.
Oct  9 06:13:53 np0005478418 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 06:13:53 np0005478418 systemd: Created slice User and Session Slice.
Oct  9 06:13:53 np0005478418 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 06:13:53 np0005478418 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 06:13:53 np0005478418 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 06:13:53 np0005478418 systemd: Reached target Local Encrypted Volumes.
Oct  9 06:13:53 np0005478418 systemd: Stopped target Switch Root.
Oct  9 06:13:53 np0005478418 systemd: Stopped target Initrd File Systems.
Oct  9 06:13:53 np0005478418 systemd: Stopped target Initrd Root File System.
Oct  9 06:13:53 np0005478418 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 06:13:53 np0005478418 systemd: Reached target Path Units.
Oct  9 06:13:53 np0005478418 systemd: Reached target rpc_pipefs.target.
Oct  9 06:13:53 np0005478418 systemd: Reached target Slice Units.
Oct  9 06:13:53 np0005478418 systemd: Reached target Swaps.
Oct  9 06:13:53 np0005478418 systemd: Reached target Local Verity Protected Volumes.
Oct  9 06:13:53 np0005478418 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 06:13:53 np0005478418 systemd: Reached target RPC Port Mapper.
Oct  9 06:13:53 np0005478418 systemd: Listening on Process Core Dump Socket.
Oct  9 06:13:53 np0005478418 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 06:13:53 np0005478418 systemd: Listening on udev Control Socket.
Oct  9 06:13:53 np0005478418 systemd: Listening on udev Kernel Socket.
Oct  9 06:13:53 np0005478418 systemd: Mounting Huge Pages File System...
Oct  9 06:13:53 np0005478418 systemd: Mounting POSIX Message Queue File System...
Oct  9 06:13:53 np0005478418 systemd: Mounting Kernel Debug File System...
Oct  9 06:13:53 np0005478418 systemd: Mounting Kernel Trace File System...
Oct  9 06:13:53 np0005478418 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 06:13:53 np0005478418 systemd: Starting Create List of Static Device Nodes...
Oct  9 06:13:53 np0005478418 systemd: Starting Load Kernel Module configfs...
Oct  9 06:13:53 np0005478418 systemd: Starting Load Kernel Module drm...
Oct  9 06:13:53 np0005478418 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 06:13:53 np0005478418 systemd: Starting Load Kernel Module fuse...
Oct  9 06:13:53 np0005478418 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 06:13:53 np0005478418 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd: Stopped File System Check on Root Device.
Oct  9 06:13:53 np0005478418 systemd: Stopped Journal Service.
Oct  9 06:13:53 np0005478418 systemd: Starting Journal Service...
Oct  9 06:13:53 np0005478418 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 06:13:53 np0005478418 systemd: Starting Generate network units from Kernel command line...
Oct  9 06:13:53 np0005478418 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 06:13:53 np0005478418 kernel: fuse: init (API version 7.37)
Oct  9 06:13:53 np0005478418 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 06:13:53 np0005478418 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 06:13:53 np0005478418 systemd: Starting Apply Kernel Variables...
Oct  9 06:13:53 np0005478418 systemd: Starting Coldplug All udev Devices...
Oct  9 06:13:53 np0005478418 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 06:13:53 np0005478418 systemd: Mounted Huge Pages File System.
Oct  9 06:13:53 np0005478418 systemd-journald[682]: Journal started
Oct  9 06:13:53 np0005478418 systemd-journald[682]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  9 06:13:53 np0005478418 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 06:13:53 np0005478418 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd: Started Journal Service.
Oct  9 06:13:53 np0005478418 systemd[1]: Mounted POSIX Message Queue File System.
Oct  9 06:13:53 np0005478418 kernel: ACPI: bus type drm_connector registered
Oct  9 06:13:53 np0005478418 systemd[1]: Mounted Kernel Debug File System.
Oct  9 06:13:53 np0005478418 systemd[1]: Mounted Kernel Trace File System.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Create List of Static Device Nodes.
Oct  9 06:13:53 np0005478418 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 06:13:53 np0005478418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load Kernel Module drm.
Oct  9 06:13:53 np0005478418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 06:13:53 np0005478418 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Apply Kernel Variables.
Oct  9 06:13:53 np0005478418 systemd[1]: Mounting FUSE Control File System...
Oct  9 06:13:53 np0005478418 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Rebuild Hardware Database...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 06:13:53 np0005478418 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Create System Users...
Oct  9 06:13:53 np0005478418 systemd[1]: Mounted FUSE Control File System.
Oct  9 06:13:53 np0005478418 systemd-journald[682]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  9 06:13:53 np0005478418 systemd-journald[682]: Received client request to flush runtime journal.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 06:13:53 np0005478418 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Create System Users.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target Local File Systems.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  9 06:13:53 np0005478418 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 06:13:53 np0005478418 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 06:13:53 np0005478418 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 06:13:53 np0005478418 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 06:13:53 np0005478418 bootctl[701]: Couldn't find EFI system partition, skipping.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Security Auditing Service...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting RPC Bind...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Rebuild Journal Catalog...
Oct  9 06:13:53 np0005478418 auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 06:13:53 np0005478418 auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Rebuild Journal Catalog.
Oct  9 06:13:53 np0005478418 systemd[1]: Started RPC Bind.
Oct  9 06:13:53 np0005478418 augenrules[712]: /sbin/augenrules: No change
Oct  9 06:13:53 np0005478418 augenrules[727]: No rules
Oct  9 06:13:53 np0005478418 augenrules[727]: enabled 1
Oct  9 06:13:53 np0005478418 augenrules[727]: failure 1
Oct  9 06:13:53 np0005478418 augenrules[727]: pid 707
Oct  9 06:13:53 np0005478418 augenrules[727]: rate_limit 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_limit 8192
Oct  9 06:13:53 np0005478418 augenrules[727]: lost 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog 4
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time 60000
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time_actual 0
Oct  9 06:13:53 np0005478418 augenrules[727]: enabled 1
Oct  9 06:13:53 np0005478418 augenrules[727]: failure 1
Oct  9 06:13:53 np0005478418 augenrules[727]: pid 707
Oct  9 06:13:53 np0005478418 augenrules[727]: rate_limit 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_limit 8192
Oct  9 06:13:53 np0005478418 augenrules[727]: lost 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog 2
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time 60000
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time_actual 0
Oct  9 06:13:53 np0005478418 augenrules[727]: enabled 1
Oct  9 06:13:53 np0005478418 augenrules[727]: failure 1
Oct  9 06:13:53 np0005478418 augenrules[727]: pid 707
Oct  9 06:13:53 np0005478418 augenrules[727]: rate_limit 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_limit 8192
Oct  9 06:13:53 np0005478418 augenrules[727]: lost 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog 0
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time 60000
Oct  9 06:13:53 np0005478418 augenrules[727]: backlog_wait_time_actual 0
Oct  9 06:13:53 np0005478418 systemd[1]: Started Security Auditing Service.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Rebuild Hardware Database.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Update is Completed...
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Update is Completed.
Oct  9 06:13:53 np0005478418 systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 06:13:53 np0005478418 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target System Initialization.
Oct  9 06:13:53 np0005478418 systemd[1]: Started dnf makecache --timer.
Oct  9 06:13:53 np0005478418 systemd[1]: Started Daily rotation of log files.
Oct  9 06:13:53 np0005478418 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target Timer Units.
Oct  9 06:13:53 np0005478418 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 06:13:53 np0005478418 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target Socket Units.
Oct  9 06:13:53 np0005478418 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 06:13:53 np0005478418 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 06:13:53 np0005478418 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 06:13:53 np0005478418 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 06:13:53 np0005478418 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 06:13:53 np0005478418 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:13:53 np0005478418 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 06:13:53 np0005478418 systemd[1]: Started D-Bus System Message Bus.
Oct  9 06:13:53 np0005478418 systemd[1]: Reached target Basic System.
Oct  9 06:13:53 np0005478418 dbus-broker-lau[772]: Ready
Oct  9 06:13:53 np0005478418 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  9 06:13:53 np0005478418 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 06:13:53 np0005478418 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 06:13:53 np0005478418 systemd[1]: Starting NTP client/server...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 06:13:53 np0005478418 systemd[1]: Starting IPv4 firewall with iptables...
Oct  9 06:13:54 np0005478418 systemd[1]: Started irqbalance daemon.
Oct  9 06:13:54 np0005478418 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 06:13:54 np0005478418 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 06:13:54 np0005478418 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 06:13:54 np0005478418 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 06:13:54 np0005478418 systemd[1]: Reached target sshd-keygen.target.
Oct  9 06:13:54 np0005478418 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 06:13:54 np0005478418 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 06:13:54 np0005478418 chronyd[799]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 06:13:54 np0005478418 chronyd[799]: Loaded 0 symmetric keys
Oct  9 06:13:54 np0005478418 chronyd[799]: Using right/UTC timezone to obtain leap second data
Oct  9 06:13:54 np0005478418 chronyd[799]: Loaded seccomp filter (level 2)
Oct  9 06:13:54 np0005478418 systemd[1]: Starting User Login Management...
Oct  9 06:13:54 np0005478418 kernel: kvm_amd: TSC scaling supported
Oct  9 06:13:54 np0005478418 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 06:13:54 np0005478418 kernel: kvm_amd: Nested Paging enabled
Oct  9 06:13:54 np0005478418 kernel: kvm_amd: LBR virtualization supported
Oct  9 06:13:54 np0005478418 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 06:13:54 np0005478418 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 06:13:54 np0005478418 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  9 06:13:54 np0005478418 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  9 06:13:54 np0005478418 kernel: Console: switching to colour dummy device 80x25
Oct  9 06:13:54 np0005478418 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 06:13:54 np0005478418 kernel: [drm] features: -context_init
Oct  9 06:13:54 np0005478418 systemd-logind[800]: New seat seat0.
Oct  9 06:13:54 np0005478418 kernel: [drm] number of scanouts: 1
Oct  9 06:13:54 np0005478418 kernel: [drm] number of cap sets: 0
Oct  9 06:13:54 np0005478418 systemd[1]: Started User Login Management.
Oct  9 06:13:54 np0005478418 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  9 06:13:54 np0005478418 systemd[1]: Started NTP client/server.
Oct  9 06:13:54 np0005478418 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 06:13:54 np0005478418 kernel: Console: switching to colour frame buffer device 128x48
Oct  9 06:13:54 np0005478418 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 06:13:54 np0005478418 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 06:13:54 np0005478418 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 06:13:54 np0005478418 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 06:13:54 np0005478418 iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Oct  9 06:13:54 np0005478418 systemd[1]: Finished IPv4 firewall with iptables.
Oct  9 06:13:54 np0005478418 cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 10:13:54 +0000. Up 6.38 seconds.
Oct  9 06:13:54 np0005478418 systemd[1]: run-cloud\x2dinit-tmp-tmptavhxfm7.mount: Deactivated successfully.
Oct  9 06:13:55 np0005478418 systemd[1]: Starting Hostname Service...
Oct  9 06:13:55 np0005478418 systemd[1]: Started Hostname Service.
Oct  9 06:13:55 np0005478418 systemd-hostnamed[857]: Hostname set to <np0005478418.novalocal> (static)
Oct  9 06:13:55 np0005478418 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 06:13:55 np0005478418 systemd[1]: Reached target Preparation for Network.
Oct  9 06:13:55 np0005478418 systemd[1]: Starting Network Manager...
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3417] NetworkManager (version 1.54.1-1.el9) is starting... (boot:270bcca3-191a-457e-9edf-c7e43152a098)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3421] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3578] manager[0x5609e5733080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3643] hostname: hostname: using hostnamed
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3643] hostname: static hostname changed from (none) to "np0005478418.novalocal"
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3651] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3751] manager[0x5609e5733080]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3752] manager[0x5609e5733080]: rfkill: WWAN hardware radio set enabled
Oct  9 06:13:55 np0005478418 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3837] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3838] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3838] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3838] manager: Networking is enabled by state file
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3840] settings: Loaded settings plugin: keyfile (internal)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3878] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3901] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3929] dhcp: init: Using DHCP client 'internal'
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3931] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3941] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3952] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3957] device (lo): Activation: starting connection 'lo' (e9a9d37c-53bf-4ae6-939b-c44f03c26d8d)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3963] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.3965] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:13:55 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4013] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4020] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 systemd[1]: Started Network Manager.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4023] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4041] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4043] device (eth0): carrier: link connected
Oct  9 06:13:55 np0005478418 systemd[1]: Reached target Network.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4060] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4067] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4074] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4078] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4079] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4082] manager: NetworkManager state is now CONNECTING
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4083] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 systemd[1]: Starting Network Manager Wait Online...
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4108] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4111] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:13:55 np0005478418 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4175] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4186] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4212] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4261] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4262] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4271] device (lo): Activation: successful, device activated.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4277] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4278] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4281] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4283] device (eth0): Activation: successful, device activated.
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4287] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 06:13:55 np0005478418 NetworkManager[861]: <info>  [1760004835.4288] manager: startup complete
Oct  9 06:13:55 np0005478418 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 06:13:55 np0005478418 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 06:13:55 np0005478418 systemd[1]: Reached target NFS client services.
Oct  9 06:13:55 np0005478418 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 06:13:55 np0005478418 systemd[1]: Reached target Remote File Systems.
Oct  9 06:13:55 np0005478418 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 06:13:55 np0005478418 systemd[1]: Finished Network Manager Wait Online.
Oct  9 06:13:55 np0005478418 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 06:13:55 np0005478418 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 10:13:55 +0000. Up 7.42 seconds.
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.12         | 255.255.255.0 | global | fa:16:3e:01:5c:d3 |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe01:5cd3/64 |       .       |  link  | fa:16:3e:01:5c:d3 |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  9 06:13:55 np0005478418 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 06:13:57 np0005478418 cloud-init[924]: Generating public/private rsa key pair.
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key fingerprint is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: SHA256:HqybQ6lHA7W2BMHFSctwvoai9e4ubsuinrKKcz9LUcg root@np0005478418.novalocal
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key's randomart image is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: +---[RSA 3072]----+
Oct  9 06:13:57 np0005478418 cloud-init[924]: |   .o=+.         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |   .o*+.         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |    Eo=.         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |    .o+o         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |  o o+ooS        |
Oct  9 06:13:57 np0005478418 cloud-init[924]: | o o o*o .       |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |.   o+...        |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |=.=oo oo         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |XX+**+o.         |
Oct  9 06:13:57 np0005478418 cloud-init[924]: +----[SHA256]-----+
Oct  9 06:13:57 np0005478418 cloud-init[924]: Generating public/private ecdsa key pair.
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key fingerprint is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: SHA256:n1xpG/h3NLEZ8o5sLEFSsqm0mA+akycyvsmOtDRFFMo root@np0005478418.novalocal
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key's randomart image is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: +---[ECDSA 256]---+
Oct  9 06:13:57 np0005478418 cloud-init[924]: |   o.    . .     |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |. o       =      |
Oct  9 06:13:57 np0005478418 cloud-init[924]: | E .   . + . . o |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |  .   + o o. .o =|
Oct  9 06:13:57 np0005478418 cloud-init[924]: |   . + oS ..=  =.|
Oct  9 06:13:57 np0005478418 cloud-init[924]: |  . + o  o =+oo..|
Oct  9 06:13:57 np0005478418 cloud-init[924]: | * * . .  +.o=...|
Oct  9 06:13:57 np0005478418 cloud-init[924]: |* * +       o. . |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |oB.              |
Oct  9 06:13:57 np0005478418 cloud-init[924]: +----[SHA256]-----+
Oct  9 06:13:57 np0005478418 cloud-init[924]: Generating public/private ed25519 key pair.
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  9 06:13:57 np0005478418 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key fingerprint is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: SHA256:bvq7JWCjCOunES4vH/sfXchSfww6i0JHHGXI5xiqGG8 root@np0005478418.novalocal
Oct  9 06:13:57 np0005478418 cloud-init[924]: The key's randomart image is:
Oct  9 06:13:57 np0005478418 cloud-init[924]: +--[ED25519 256]--+
Oct  9 06:13:57 np0005478418 cloud-init[924]: |     ..oo        |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |     .+o.        |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |     .o=. .      |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |.   ...o.+ o     |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |.= .. * S o o    |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |ooEo + B + .     |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |o+o o o * .      |
Oct  9 06:13:57 np0005478418 cloud-init[924]: |+..+ . + o       |
Oct  9 06:13:57 np0005478418 cloud-init[924]: | =*...o.+o       |
Oct  9 06:13:57 np0005478418 cloud-init[924]: +----[SHA256]-----+
Oct  9 06:13:57 np0005478418 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 06:13:57 np0005478418 systemd[1]: Reached target Cloud-config availability.
Oct  9 06:13:57 np0005478418 systemd[1]: Reached target Network is Online.
Oct  9 06:13:57 np0005478418 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 06:13:57 np0005478418 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 06:13:57 np0005478418 systemd[1]: Starting System Logging Service...
Oct  9 06:13:57 np0005478418 systemd[1]: Starting OpenSSH server daemon...
Oct  9 06:13:57 np0005478418 sm-notify[1006]: Version 2.5.4 starting
Oct  9 06:13:57 np0005478418 systemd[1]: Starting Permit User Sessions...
Oct  9 06:13:57 np0005478418 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 06:13:57 np0005478418 systemd[1]: Started OpenSSH server daemon.
Oct  9 06:13:57 np0005478418 systemd[1]: Finished Permit User Sessions.
Oct  9 06:13:57 np0005478418 systemd[1]: Started Command Scheduler.
Oct  9 06:13:57 np0005478418 systemd[1]: Started Getty on tty1.
Oct  9 06:13:57 np0005478418 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 06:13:57 np0005478418 systemd[1]: Reached target Login Prompts.
Oct  9 06:13:57 np0005478418 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Oct  9 06:13:57 np0005478418 systemd[1]: Started System Logging Service.
Oct  9 06:13:57 np0005478418 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  9 06:13:57 np0005478418 systemd[1]: Reached target Multi-User System.
Oct  9 06:13:57 np0005478418 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 06:13:57 np0005478418 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 06:13:57 np0005478418 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 06:13:57 np0005478418 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 06:13:58 np0005478418 cloud-init[1038]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 10:13:57 +0000. Up 9.64 seconds.
Oct  9 06:13:58 np0005478418 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 06:13:58 np0005478418 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 06:13:58 np0005478418 cloud-init[1042]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 10:13:58 +0000. Up 10.01 seconds.
Oct  9 06:13:58 np0005478418 cloud-init[1044]: #############################################################
Oct  9 06:13:58 np0005478418 cloud-init[1045]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  9 06:13:58 np0005478418 cloud-init[1047]: 256 SHA256:n1xpG/h3NLEZ8o5sLEFSsqm0mA+akycyvsmOtDRFFMo root@np0005478418.novalocal (ECDSA)
Oct  9 06:13:58 np0005478418 cloud-init[1049]: 256 SHA256:bvq7JWCjCOunES4vH/sfXchSfww6i0JHHGXI5xiqGG8 root@np0005478418.novalocal (ED25519)
Oct  9 06:13:58 np0005478418 cloud-init[1051]: 3072 SHA256:HqybQ6lHA7W2BMHFSctwvoai9e4ubsuinrKKcz9LUcg root@np0005478418.novalocal (RSA)
Oct  9 06:13:58 np0005478418 cloud-init[1052]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  9 06:13:58 np0005478418 cloud-init[1053]: #############################################################
Oct  9 06:13:58 np0005478418 cloud-init[1042]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 10:13:58 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.17 seconds
Oct  9 06:13:58 np0005478418 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 06:13:58 np0005478418 systemd[1]: Reached target Cloud-init target.
Oct  9 06:13:58 np0005478418 systemd[1]: Startup finished in 1.589s (kernel) + 2.388s (initrd) + 6.256s (userspace) = 10.234s.
Oct  9 06:14:00 np0005478418 chronyd[799]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Oct  9 06:14:00 np0005478418 chronyd[799]: System clock TAI offset set to 37 seconds
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 25 affinity is now unmanaged
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 31 affinity is now unmanaged
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 28 affinity is now unmanaged
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 32 affinity is now unmanaged
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 30 affinity is now unmanaged
Oct  9 06:14:04 np0005478418 irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  9 06:14:04 np0005478418 irqbalance[791]: IRQ 29 affinity is now unmanaged
Oct  9 06:14:05 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:14:13 np0005478418 systemd-logind[800]: New session 1 of user zuul.
Oct  9 06:14:13 np0005478418 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 06:14:13 np0005478418 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 06:14:13 np0005478418 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 06:14:13 np0005478418 systemd[1]: Starting User Manager for UID 1000...
Oct  9 06:14:13 np0005478418 systemd[1061]: Queued start job for default target Main User Target.
Oct  9 06:14:13 np0005478418 systemd[1061]: Created slice User Application Slice.
Oct  9 06:14:13 np0005478418 systemd[1061]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 06:14:13 np0005478418 systemd[1061]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 06:14:13 np0005478418 systemd[1061]: Reached target Paths.
Oct  9 06:14:13 np0005478418 systemd[1061]: Reached target Timers.
Oct  9 06:14:13 np0005478418 systemd[1061]: Starting D-Bus User Message Bus Socket...
Oct  9 06:14:13 np0005478418 systemd[1061]: Starting Create User's Volatile Files and Directories...
Oct  9 06:14:13 np0005478418 systemd[1061]: Finished Create User's Volatile Files and Directories.
Oct  9 06:14:13 np0005478418 systemd[1061]: Listening on D-Bus User Message Bus Socket.
Oct  9 06:14:13 np0005478418 systemd[1061]: Reached target Sockets.
Oct  9 06:14:13 np0005478418 systemd[1061]: Reached target Basic System.
Oct  9 06:14:13 np0005478418 systemd[1061]: Reached target Main User Target.
Oct  9 06:14:13 np0005478418 systemd[1061]: Startup finished in 136ms.
Oct  9 06:14:13 np0005478418 systemd[1]: Started User Manager for UID 1000.
Oct  9 06:14:13 np0005478418 systemd[1]: Started Session 1 of User zuul.
Oct  9 06:14:14 np0005478418 python3[1143]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:14:17 np0005478418 python3[1171]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:14:25 np0005478418 python3[1229]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:14:25 np0005478418 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 06:14:26 np0005478418 python3[1271]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  9 06:14:28 np0005478418 python3[1297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKrWuDJ5xTespcHwQ2TaaZ3Zk8EaZpCUc4NYySGvmHO5jpevNRcynvvzZP2235qfpytQBZWu/Z6Xb0vFB1tbHIX3W9l+eufJVyGysUs55Pa5jUlZDQxhR26hWFu2QTteKFQfX1nvFt7R/YN+Mh5dagla2ZHOZr6LBu5HE/bNF/0PglxRmIUCPeEhjnfrSGG9CdpB+yoKWmd8AguE6r9TXU+r/8cI5loiqrFdl5jNeWZumJUQOTKUniHhGrY2am/CNWKfJkcJzIZOjH3206u91OJkq73eNJ+7Z3p/6lnXTvVKSKuxr5rchpaFK0rsyHqksQ49Rdccj8dvbkCIZOWyjtD5t0vx4KzEhkXanY8DUV73rw2pyOXq9pkgqyQ3fSU65BP0P8Jbcy4xrm13WebffTKf7+GzuTPsncbhmT6Gi1SmiWAUTVCn+6SpTUOyeNzlp4Q5DciVrJ8mcYQMi6EpA9D2EWiDw7flcheiMXAnEXUYB9IXve/BJAfKDsFkBokE8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:29 np0005478418 python3[1321]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:29 np0005478418 python3[1420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:29 np0005478418 python3[1491]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760004869.2514296-251-172860137487498/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3225e00d67a2490db4290fbe79763094_id_rsa follow=False checksum=0e870f40d8be952bfeabf3a2d60ada1198f555ed backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:30 np0005478418 python3[1614]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:30 np0005478418 python3[1685]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760004870.2137785-306-265062856312115/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3225e00d67a2490db4290fbe79763094_id_rsa.pub follow=False checksum=8e0e394660ae2f6e6911c5639dd6b05de78756d1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:32 np0005478418 python3[1733]: ansible-ping Invoked with data=pong
Oct  9 06:14:33 np0005478418 python3[1757]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:14:35 np0005478418 python3[1815]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  9 06:14:36 np0005478418 python3[1847]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:36 np0005478418 python3[1871]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:36 np0005478418 python3[1895]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:37 np0005478418 python3[1919]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:37 np0005478418 python3[1943]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:37 np0005478418 python3[1967]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:39 np0005478418 python3[1993]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:40 np0005478418 python3[2071]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:40 np0005478418 python3[2144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760004879.9304526-31-11681795800676/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:41 np0005478418 python3[2192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:41 np0005478418 python3[2216]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:42 np0005478418 python3[2240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:42 np0005478418 python3[2264]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:42 np0005478418 python3[2288]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:42 np0005478418 python3[2312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:43 np0005478418 python3[2336]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:43 np0005478418 python3[2360]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:43 np0005478418 python3[2384]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:43 np0005478418 python3[2408]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:44 np0005478418 python3[2432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:44 np0005478418 python3[2456]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:44 np0005478418 python3[2480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:45 np0005478418 python3[2504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:45 np0005478418 python3[2528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:45 np0005478418 python3[2552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:45 np0005478418 python3[2576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:46 np0005478418 python3[2600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:46 np0005478418 python3[2624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:46 np0005478418 python3[2648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:47 np0005478418 python3[2672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:47 np0005478418 python3[2696]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:47 np0005478418 python3[2720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:47 np0005478418 python3[2744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:48 np0005478418 python3[2768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:48 np0005478418 python3[2792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:14:51 np0005478418 python3[2818]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 06:14:51 np0005478418 systemd[1]: Starting Time & Date Service...
Oct  9 06:14:51 np0005478418 systemd[1]: Started Time & Date Service.
Oct  9 06:14:51 np0005478418 systemd-timedated[2820]: Changed time zone to 'UTC' (UTC).
Oct  9 06:14:52 np0005478418 python3[2849]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:52 np0005478418 python3[2925]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:53 np0005478418 python3[2996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760004892.4535456-251-115026025726758/source _original_basename=tmp3pz3_um4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:53 np0005478418 python3[3096]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:53 np0005478418 python3[3167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760004893.284256-301-9015699235675/source _original_basename=tmpsujbe7s5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:54 np0005478418 python3[3269]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:55 np0005478418 python3[3342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760004894.5332904-381-225186881660787/source _original_basename=tmpqboko7e7 follow=False checksum=bee07c3642df91d0fc882b8d0517473ba529622f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:55 np0005478418 python3[3390]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:14:55 np0005478418 python3[3416]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:14:56 np0005478418 python3[3496]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:14:56 np0005478418 python3[3569]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760004896.1628282-451-176472209205315/source _original_basename=tmpp1kdnb22 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:14:57 np0005478418 python3[3620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-63a5-7a45-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:14:58 np0005478418 python3[3648]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-63a5-7a45-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  9 06:14:59 np0005478418 python3[3676]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:15:17 np0005478418 python3[3702]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:15:21 np0005478418 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  9 06:15:56 np0005478418 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  9 06:15:56 np0005478418 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  9 06:15:56 np0005478418 NetworkManager[861]: <info>  [1760004956.9852] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 06:15:56 np0005478418 systemd-udevd[3706]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0065] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0085] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0088] device (eth1): carrier: link connected
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0089] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0093] policy: auto-activating connection 'Wired connection 1' (d9ec873a-c659-38cd-905a-e5b4323f1c1d)
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0096] device (eth1): Activation: starting connection 'Wired connection 1' (d9ec873a-c659-38cd-905a-e5b4323f1c1d)
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0096] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0098] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0100] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:15:57 np0005478418 NetworkManager[861]: <info>  [1760004957.0103] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:15:58 np0005478418 python3[3732]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-e9a6-dcaa-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:16:08 np0005478418 python3[3812]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:16:08 np0005478418 python3[3885]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760004967.8475583-104-41373263397737/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=862a50ce9afb0d582b6842adcb08dddd4f5800de backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:16:09 np0005478418 python3[3935]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:16:09 np0005478418 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 06:16:09 np0005478418 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 06:16:09 np0005478418 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 06:16:09 np0005478418 systemd[1]: Stopping Network Manager...
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5376] caught SIGTERM, shutting down normally.
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5391] dhcp4 (eth0): canceled DHCP transaction
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5392] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5392] dhcp4 (eth0): state changed no lease
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5396] manager: NetworkManager state is now CONNECTING
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5501] dhcp4 (eth1): canceled DHCP transaction
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5502] dhcp4 (eth1): state changed no lease
Oct  9 06:16:09 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:16:09 np0005478418 NetworkManager[861]: <info>  [1760004969.5594] exiting (success)
Oct  9 06:16:09 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:16:09 np0005478418 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 06:16:09 np0005478418 systemd[1]: Stopped Network Manager.
Oct  9 06:16:09 np0005478418 systemd[1]: Starting Network Manager...
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.6334] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:270bcca3-191a-457e-9edf-c7e43152a098)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.6338] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.6405] manager[0x55ca83385070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 06:16:09 np0005478418 systemd[1]: Starting Hostname Service...
Oct  9 06:16:09 np0005478418 systemd[1]: Started Hostname Service.
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7133] hostname: hostname: using hostnamed
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7134] hostname: static hostname changed from (none) to "np0005478418.novalocal"
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7139] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7144] manager[0x55ca83385070]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7145] manager[0x55ca83385070]: rfkill: WWAN hardware radio set enabled
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7179] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7180] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7180] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7181] manager: Networking is enabled by state file
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7184] settings: Loaded settings plugin: keyfile (internal)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7187] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7214] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7224] dhcp: init: Using DHCP client 'internal'
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7227] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7235] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7243] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7254] device (lo): Activation: starting connection 'lo' (e9a9d37c-53bf-4ae6-939b-c44f03c26d8d)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7263] device (eth0): carrier: link connected
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7268] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7274] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7274] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7283] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7290] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7297] device (eth1): carrier: link connected
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7302] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7307] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d9ec873a-c659-38cd-905a-e5b4323f1c1d) (indicated)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7307] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7313] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7321] device (eth1): Activation: starting connection 'Wired connection 1' (d9ec873a-c659-38cd-905a-e5b4323f1c1d)
Oct  9 06:16:09 np0005478418 systemd[1]: Started Network Manager.
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7327] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7332] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7334] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7336] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7338] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7342] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7344] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7346] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7349] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7357] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7359] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7369] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7372] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7392] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7394] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7399] device (lo): Activation: successful, device activated.
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7407] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7415] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 06:16:09 np0005478418 systemd[1]: Starting Network Manager Wait Online...
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7556] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7574] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7576] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7579] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7583] device (eth0): Activation: successful, device activated.
Oct  9 06:16:09 np0005478418 NetworkManager[3947]: <info>  [1760004969.7587] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 06:16:10 np0005478418 python3[4019]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-e9a6-dcaa-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:16:19 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:16:26 np0005478418 systemd[1061]: Starting Mark boot as successful...
Oct  9 06:16:26 np0005478418 systemd[1061]: Finished Mark boot as successful.
Oct  9 06:16:39 np0005478418 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3325] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 06:16:55 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:16:55 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3608] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3611] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3621] device (eth1): Activation: successful, device activated.
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3628] manager: startup complete
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3629] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <warn>  [1760005015.3638] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3645] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 systemd[1]: Finished Network Manager Wait Online.
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3861] dhcp4 (eth1): canceled DHCP transaction
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3861] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3861] dhcp4 (eth1): state changed no lease
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3876] policy: auto-activating connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3880] device (eth1): Activation: starting connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3881] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3885] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3891] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3899] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3935] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3938] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:16:55 np0005478418 NetworkManager[3947]: <info>  [1760005015.3946] device (eth1): Activation: successful, device activated.
Oct  9 06:17:05 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:17:10 np0005478418 systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Oct  9 06:18:14 np0005478418 systemd-logind[800]: New session 3 of user zuul.
Oct  9 06:18:14 np0005478418 systemd[1]: Started Session 3 of User zuul.
Oct  9 06:18:14 np0005478418 python3[4129]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:18:14 np0005478418 python3[4202]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760005094.1171696-373-278668689231388/source _original_basename=tmpmc9nhklp follow=False checksum=bc3809a74635f6eb8c3d20002e9281bdb7564239 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:18:18 np0005478418 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 06:18:18 np0005478418 systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Oct  9 06:18:18 np0005478418 systemd-logind[800]: Removed session 3.
Oct  9 06:19:26 np0005478418 systemd[1061]: Created slice User Background Tasks Slice.
Oct  9 06:19:26 np0005478418 systemd[1061]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 06:19:26 np0005478418 systemd[1061]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 06:25:44 np0005478418 systemd-logind[800]: New session 4 of user zuul.
Oct  9 06:25:44 np0005478418 systemd[1]: Started Session 4 of User zuul.
Oct  9 06:25:45 np0005478418 python3[4264]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-5cc3-966a-000000001d00-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:45 np0005478418 python3[4293]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:25:46 np0005478418 python3[4319]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:25:46 np0005478418 python3[4345]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:25:46 np0005478418 python3[4371]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:25:47 np0005478418 python3[4397]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:25:47 np0005478418 python3[4397]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  9 06:25:48 np0005478418 python3[4423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 06:25:48 np0005478418 systemd[1]: Reloading.
Oct  9 06:25:48 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:25:50 np0005478418 python3[4479]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  9 06:25:51 np0005478418 python3[4505]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:51 np0005478418 python3[4533]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:51 np0005478418 python3[4561]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:52 np0005478418 python3[4589]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:52 np0005478418 python3[4616]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-5cc3-966a-000000001d06-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:25:53 np0005478418 python3[4646]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:25:56 np0005478418 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 06:25:56 np0005478418 systemd[1]: session-4.scope: Consumed 3.196s CPU time.
Oct  9 06:25:56 np0005478418 systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Oct  9 06:25:56 np0005478418 systemd-logind[800]: Removed session 4.
Oct  9 06:25:58 np0005478418 systemd-logind[800]: New session 5 of user zuul.
Oct  9 06:25:58 np0005478418 systemd[1]: Started Session 5 of User zuul.
Oct  9 06:25:58 np0005478418 python3[4679]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 06:26:27 np0005478418 kernel: SELinux:  Converting 363 SID table entries...
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:26:27 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  Converting 363 SID table entries...
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:26:36 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  Converting 363 SID table entries...
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:26:45 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:26:47 np0005478418 setsebool[4765]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  9 06:26:47 np0005478418 setsebool[4765]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  9 06:26:58 np0005478418 kernel: SELinux:  Converting 366 SID table entries...
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:26:58 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:27:16 np0005478418 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 06:27:16 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:27:16 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:27:16 np0005478418 systemd[1]: Reloading.
Oct  9 06:27:16 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:27:16 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:27:17 np0005478418 systemd[1]: Starting PackageKit Daemon...
Oct  9 06:27:17 np0005478418 systemd[1]: Starting Authorization Manager...
Oct  9 06:27:17 np0005478418 polkitd[6241]: Started polkitd version 0.117
Oct  9 06:27:17 np0005478418 systemd[1]: Started Authorization Manager.
Oct  9 06:27:17 np0005478418 systemd[1]: Started PackageKit Daemon.
Oct  9 06:27:54 np0005478418 irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  9 06:27:54 np0005478418 irqbalance[791]: IRQ 27 affinity is now unmanaged
Oct  9 06:28:02 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:28:02 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:28:02 np0005478418 systemd[1]: man-db-cache-update.service: Consumed 52.897s CPU time.
Oct  9 06:28:02 np0005478418 systemd[1]: run-r403b78db1b32474984025e736d8dba84.service: Deactivated successfully.
Oct  9 06:28:10 np0005478418 python3[26158]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-445b-4abe-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:28:11 np0005478418 kernel: evm: overlay not supported
Oct  9 06:28:11 np0005478418 systemd[1061]: Starting D-Bus User Message Bus...
Oct  9 06:28:11 np0005478418 dbus-broker-launch[26216]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 06:28:11 np0005478418 dbus-broker-launch[26216]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 06:28:11 np0005478418 systemd[1061]: Started D-Bus User Message Bus.
Oct  9 06:28:11 np0005478418 dbus-broker-lau[26216]: Ready
Oct  9 06:28:11 np0005478418 systemd[1061]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 06:28:11 np0005478418 systemd[1061]: Created slice Slice /user.
Oct  9 06:28:11 np0005478418 systemd[1061]: podman-26197.scope: unit configures an IP firewall, but not running as root.
Oct  9 06:28:11 np0005478418 systemd[1061]: (This warning is only shown for the first unit using IP firewalling.)
Oct  9 06:28:11 np0005478418 systemd[1061]: Started podman-26197.scope.
Oct  9 06:28:12 np0005478418 systemd[1061]: Started podman-pause-aa57faf4.scope.
Oct  9 06:28:13 np0005478418 python3[26244]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.38:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.38:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:28:14 np0005478418 systemd[1]: session-5.scope: Deactivated successfully.
Oct  9 06:28:14 np0005478418 systemd[1]: session-5.scope: Consumed 1min 9.198s CPU time.
Oct  9 06:28:14 np0005478418 systemd-logind[800]: Session 5 logged out. Waiting for processes to exit.
Oct  9 06:28:14 np0005478418 systemd-logind[800]: Removed session 5.
Oct  9 06:28:45 np0005478418 systemd-logind[800]: New session 6 of user zuul.
Oct  9 06:28:45 np0005478418 systemd[1]: Started Session 6 of User zuul.
Oct  9 06:28:45 np0005478418 python3[26282]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFPFZXUzX58R25zDan7DNWDV+4xfIWci+fEWl7QL95luQBdAv6qp1uV6urvFvWypY653R852OwnUICLe/ebuLAg= zuul@np0005478417.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:28:46 np0005478418 python3[26308]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFPFZXUzX58R25zDan7DNWDV+4xfIWci+fEWl7QL95luQBdAv6qp1uV6urvFvWypY653R852OwnUICLe/ebuLAg= zuul@np0005478417.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:28:46 np0005478418 python3[26334]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005478418.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  9 06:28:48 np0005478418 python3[26368]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFPFZXUzX58R25zDan7DNWDV+4xfIWci+fEWl7QL95luQBdAv6qp1uV6urvFvWypY653R852OwnUICLe/ebuLAg= zuul@np0005478417.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 06:28:48 np0005478418 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 06:28:48 np0005478418 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 06:28:48 np0005478418 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 06:28:48 np0005478418 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 06:28:48 np0005478418 python3[26447]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:28:49 np0005478418 python3[26522]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760005728.6401842-167-8268333290742/source _original_basename=tmpyissijao follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:28:50 np0005478418 python3[26572]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  9 06:28:50 np0005478418 systemd[1]: Starting Hostname Service...
Oct  9 06:28:50 np0005478418 systemd[1]: Started Hostname Service.
Oct  9 06:28:50 np0005478418 systemd-hostnamed[26576]: Changed pretty hostname to 'compute-0'
Oct  9 06:28:50 np0005478418 systemd-hostnamed[26576]: Hostname set to <compute-0> (static)
Oct  9 06:28:50 np0005478418 NetworkManager[3947]: <info>  [1760005730.3121] hostname: static hostname changed from "np0005478418.novalocal" to "compute-0"
Oct  9 06:28:50 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:28:50 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:28:50 np0005478418 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 06:28:50 np0005478418 systemd[1]: session-6.scope: Consumed 2.054s CPU time.
Oct  9 06:28:50 np0005478418 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Oct  9 06:28:50 np0005478418 systemd-logind[800]: Removed session 6.
Oct  9 06:29:00 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:29:20 np0005478418 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 06:32:23 np0005478418 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 06:33:38 np0005478418 systemd-logind[800]: New session 7 of user zuul.
Oct  9 06:33:38 np0005478418 systemd[1]: Started Session 7 of User zuul.
Oct  9 06:33:39 np0005478418 python3[26673]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:33:40 np0005478418 python3[26789]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:41 np0005478418 python3[26862]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:41 np0005478418 python3[26888]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:41 np0005478418 python3[26961]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:42 np0005478418 python3[26987]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:42 np0005478418 python3[27060]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:42 np0005478418 python3[27086]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:43 np0005478418 python3[27159]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:43 np0005478418 python3[27185]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:43 np0005478418 python3[27258]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:43 np0005478418 python3[27284]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:44 np0005478418 python3[27357]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:44 np0005478418 python3[27383]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 06:33:44 np0005478418 python3[27456]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760006020.5346847-30635-265734403225368/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:33:56 np0005478418 python3[27514]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:38:56 np0005478418 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 06:38:56 np0005478418 systemd[1]: session-7.scope: Consumed 4.807s CPU time.
Oct  9 06:38:56 np0005478418 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Oct  9 06:38:56 np0005478418 systemd-logind[800]: Removed session 7.
Oct  9 06:46:35 np0005478418 systemd-logind[800]: New session 8 of user zuul.
Oct  9 06:46:35 np0005478418 systemd[1]: Started Session 8 of User zuul.
Oct  9 06:46:36 np0005478418 python3.9[27678]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:46:37 np0005478418 python3.9[27859]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:46:46 np0005478418 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 06:46:46 np0005478418 systemd[1]: session-8.scope: Consumed 7.482s CPU time.
Oct  9 06:46:46 np0005478418 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Oct  9 06:46:46 np0005478418 systemd-logind[800]: Removed session 8.
Oct  9 06:47:03 np0005478418 systemd-logind[800]: New session 9 of user zuul.
Oct  9 06:47:03 np0005478418 systemd[1]: Started Session 9 of User zuul.
Oct  9 06:47:04 np0005478418 python3.9[28070]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 06:47:05 np0005478418 python3.9[28244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:47:06 np0005478418 python3.9[28396]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:47:07 np0005478418 python3.9[28549]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:47:08 np0005478418 python3.9[28701]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:47:09 np0005478418 python3.9[28853]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:47:09 np0005478418 python3.9[28976]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760006828.7026114-177-103723732078101/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:47:10 np0005478418 python3.9[29128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:47:11 np0005478418 python3.9[29284]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:47:12 np0005478418 python3.9[29434]: ansible-ansible.builtin.service_facts Invoked
Oct  9 06:47:17 np0005478418 python3.9[29689]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:47:18 np0005478418 python3.9[29839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:47:20 np0005478418 python3.9[29993]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:47:21 np0005478418 python3.9[30151]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:47:22 np0005478418 python3.9[30235]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:48:05 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:05 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:06 np0005478418 dbus-broker-launch[26216]: Noticed file-system modification, trigger reload.
Oct  9 06:48:06 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:06 np0005478418 dbus-broker-launch[26216]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 06:48:06 np0005478418 dbus-broker-launch[26216]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 06:48:06 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:06 np0005478418 systemd[1]: Reexecuting.
Oct  9 06:48:06 np0005478418 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 06:48:06 np0005478418 systemd: Detected virtualization kvm.
Oct  9 06:48:06 np0005478418 systemd: Detected architecture x86-64.
Oct  9 06:48:06 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:48:06 np0005478418 systemd[1]: Reloading.
Oct  9 06:48:06 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:48:06 np0005478418 systemd[1]: Starting dnf makecache...
Oct  9 06:48:06 np0005478418 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  9 06:48:07 np0005478418 dnf[30512]: Failed determining last makecache time.
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-barbican-42b4c41831408a8e323 135 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 186 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-cinder-1c00d6490d88e436f26ef 166 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-stevedore-c4acc5639fd2329372142 183 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-cloudkitty-tests-tempest-3961dc 167 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-diskimage-builder-43381184423c185801b5 185 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 183 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-designate-tests-tempest-347fdbc 184 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-glance-1fd12c29b339f30fe823e 190 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 185 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-manila-3c01b7181572c95dac462 201 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-vmware-nsxlib-458234972d1428ac9 196 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-octavia-ba397f07a7331190208c 188 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-watcher-c014f81a8647287f6dcc 190 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-edpm-image-builder-55ba53cf215b14ed95b 186 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 195 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-swift-dc98a8463506ac520c469a 167 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-python-tempestconf-8515371b7cceebd4282 189 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: delorean-openstack-heat-ui-013accbfd179753bc3f0 200 kB/s | 3.0 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: CentOS Stream 9 - BaseOS                         22 kB/s | 6.1 kB     00:00
Oct  9 06:48:07 np0005478418 dnf[30512]: CentOS Stream 9 - AppStream                      57 kB/s | 6.5 kB     00:00
Oct  9 06:48:08 np0005478418 systemd[1]: Reloading.
Oct  9 06:48:08 np0005478418 dnf[30512]: CentOS Stream 9 - CRB                            58 kB/s | 6.0 kB     00:00
Oct  9 06:48:08 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:48:08 np0005478418 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 06:48:08 np0005478418 dnf[30512]: CentOS Stream 9 - Extras packages                49 kB/s | 8.0 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: dlrn-antelope-testing                           161 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 06:48:08 np0005478418 dnf[30512]: dlrn-antelope-build-deps                        172 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 systemd[1]: Reloading.
Oct  9 06:48:08 np0005478418 dnf[30512]: centos9-rabbitmq                                 96 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: centos9-storage                                 122 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:48:08 np0005478418 dnf[30512]: centos9-opstools                                135 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: NFV SIG OpenvSwitch                             137 kB/s | 3.0 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: repo-setup-centos-appstream                     207 kB/s | 4.4 kB     00:00
Oct  9 06:48:08 np0005478418 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  9 06:48:08 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:08 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:48:08 np0005478418 dnf[30512]: repo-setup-centos-baseos                        173 kB/s | 3.9 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: repo-setup-centos-highavailability              150 kB/s | 3.9 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Oct  9 06:48:08 np0005478418 dnf[30512]: Extra Packages for Enterprise Linux 9 - x86_64  234 kB/s |  30 kB     00:00
Oct  9 06:48:09 np0005478418 dnf[30512]: Metadata cache created.
Oct  9 06:48:09 np0005478418 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 06:48:09 np0005478418 systemd[1]: Finished dnf makecache.
Oct  9 06:48:09 np0005478418 systemd[1]: dnf-makecache.service: Consumed 1.612s CPU time.
Oct  9 06:49:10 np0005478418 kernel: SELinux:  Converting 2714 SID table entries...
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:49:10 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:49:10 np0005478418 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  9 06:49:10 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:49:10 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:49:10 np0005478418 systemd[1]: Reloading.
Oct  9 06:49:10 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:49:10 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:49:10 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:49:10 np0005478418 systemd-journald[682]: Journal stopped
Oct  9 06:49:10 np0005478418 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 06:49:10 np0005478418 systemd: Stopping Journal Service...
Oct  9 06:49:10 np0005478418 systemd: Stopping Rule-based Manager for Device Events and Files...
Oct  9 06:49:10 np0005478418 systemd: systemd-journald.service: Deactivated successfully.
Oct  9 06:49:10 np0005478418 systemd: Stopped Journal Service.
Oct  9 06:49:10 np0005478418 systemd: Starting Journal Service...
Oct  9 06:49:10 np0005478418 systemd: systemd-udevd.service: Deactivated successfully.
Oct  9 06:49:10 np0005478418 systemd: Stopped Rule-based Manager for Device Events and Files.
Oct  9 06:49:10 np0005478418 systemd: systemd-udevd.service: Consumed 2.194s CPU time.
Oct  9 06:49:10 np0005478418 systemd: Starting Rule-based Manager for Device Events and Files...
Oct  9 06:49:10 np0005478418 systemd-journald[30988]: Journal started
Oct  9 06:49:10 np0005478418 systemd-journald[30988]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  9 06:49:10 np0005478418 systemd: Started Journal Service.
Oct  9 06:49:10 np0005478418 systemd-udevd[30996]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 06:49:10 np0005478418 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 06:49:11 np0005478418 systemd[1]: Reloading.
Oct  9 06:49:11 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:49:11 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:49:12 np0005478418 systemd[1]: Starting PackageKit Daemon...
Oct  9 06:49:12 np0005478418 systemd[1]: Started PackageKit Daemon.
Oct  9 06:49:18 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:49:18 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:49:18 np0005478418 systemd[1]: man-db-cache-update.service: Consumed 10.239s CPU time.
Oct  9 06:49:18 np0005478418 systemd[1]: run-raab652f4ba974daba9ec0efc23189e1c.service: Deactivated successfully.
Oct  9 06:49:18 np0005478418 systemd[1]: run-r82eb969a42554941a222b25543f572ae.service: Deactivated successfully.
Oct  9 06:49:29 np0005478418 python3.9[38820]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:49:32 np0005478418 python3.9[39101]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 06:49:33 np0005478418 python3.9[39253]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 06:49:35 np0005478418 python3.9[39407]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:49:38 np0005478418 python3.9[39559]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  9 06:49:39 np0005478418 python3.9[39711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:49:43 np0005478418 python3.9[39863]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:49:43 np0005478418 python3.9[39987]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760006979.899519-639-201880699713850/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9f1ff82491fe3b5e7873c68de7b435e722b58b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:49:45 np0005478418 python3.9[40139]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 06:49:46 np0005478418 python3.9[40292]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 06:49:46 np0005478418 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 06:49:47 np0005478418 python3.9[40451]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 06:49:48 np0005478418 python3.9[40611]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 06:49:49 np0005478418 python3.9[40764]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 06:49:50 np0005478418 python3.9[40922]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 06:49:51 np0005478418 python3.9[41074]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:49:53 np0005478418 python3.9[41227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:49:54 np0005478418 python3.9[41379]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:49:54 np0005478418 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  9 06:49:54 np0005478418 irqbalance[791]: IRQ 26 affinity is now unmanaged
Oct  9 06:49:54 np0005478418 python3.9[41502]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760006993.7557852-924-4988596154572/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:49:55 np0005478418 python3.9[41654]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:49:55 np0005478418 systemd[1]: Starting Load Kernel Modules...
Oct  9 06:49:55 np0005478418 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 06:49:55 np0005478418 kernel: Bridge firewalling registered
Oct  9 06:49:55 np0005478418 systemd-modules-load[41658]: Inserted module 'br_netfilter'
Oct  9 06:49:55 np0005478418 systemd[1]: Finished Load Kernel Modules.
Oct  9 06:49:56 np0005478418 python3.9[41813]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:49:57 np0005478418 python3.9[41936]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760006996.216818-993-889361176814/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:49:58 np0005478418 python3.9[42088]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:50:01 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:50:01 np0005478418 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  9 06:50:01 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:50:01 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:50:01 np0005478418 systemd[1]: Reloading.
Oct  9 06:50:01 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:50:01 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:50:03 np0005478418 python3.9[43607]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:50:04 np0005478418 python3.9[44609]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 06:50:04 np0005478418 python3.9[45468]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:50:05 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:50:05 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:50:05 np0005478418 systemd[1]: man-db-cache-update.service: Consumed 4.511s CPU time.
Oct  9 06:50:05 np0005478418 systemd[1]: run-r432d4b2c43d5481eb43ab7a19bba1118.service: Deactivated successfully.
Oct  9 06:50:05 np0005478418 python3.9[46112]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:05 np0005478418 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 06:50:06 np0005478418 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 06:50:07 np0005478418 python3.9[46485]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:50:07 np0005478418 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 06:50:07 np0005478418 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 06:50:07 np0005478418 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 06:50:07 np0005478418 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 06:50:07 np0005478418 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 06:50:08 np0005478418 python3.9[46646]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 06:50:11 np0005478418 python3.9[46798]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:50:11 np0005478418 systemd[1]: Reloading.
Oct  9 06:50:11 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:50:12 np0005478418 python3.9[46986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:50:12 np0005478418 systemd[1]: Reloading.
Oct  9 06:50:12 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:50:13 np0005478418 python3.9[47175]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:14 np0005478418 python3.9[47328]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:14 np0005478418 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 06:50:15 np0005478418 python3.9[47481]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:17 np0005478418 python3.9[47643]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:17 np0005478418 python3.9[47796]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:50:17 np0005478418 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 06:50:17 np0005478418 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 06:50:17 np0005478418 systemd[1]: Stopping Apply Kernel Variables...
Oct  9 06:50:17 np0005478418 systemd[1]: Starting Apply Kernel Variables...
Oct  9 06:50:17 np0005478418 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 06:50:18 np0005478418 systemd[1]: Finished Apply Kernel Variables.
Oct  9 06:50:18 np0005478418 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 06:50:18 np0005478418 systemd[1]: session-9.scope: Consumed 2min 11.968s CPU time.
Oct  9 06:50:18 np0005478418 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Oct  9 06:50:18 np0005478418 systemd-logind[800]: Removed session 9.
Oct  9 06:50:23 np0005478418 systemd-logind[800]: New session 10 of user zuul.
Oct  9 06:50:23 np0005478418 systemd[1]: Started Session 10 of User zuul.
Oct  9 06:50:24 np0005478418 python3.9[47979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:50:25 np0005478418 python3.9[48135]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 06:50:26 np0005478418 python3.9[48288]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 06:50:27 np0005478418 python3.9[48446]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 06:50:28 np0005478418 python3.9[48606]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:50:29 np0005478418 python3.9[48690]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 06:50:33 np0005478418 python3.9[48853]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:50:44 np0005478418 kernel: SELinux:  Converting 2725 SID table entries...
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:50:44 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:50:44 np0005478418 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  9 06:50:44 np0005478418 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 06:50:45 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:50:45 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:50:45 np0005478418 systemd[1]: Reloading.
Oct  9 06:50:45 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:50:45 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:50:45 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:50:46 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:50:46 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:50:46 np0005478418 systemd[1]: run-r298a76e645dd4339be7846eb1d958310.service: Deactivated successfully.
Oct  9 06:50:49 np0005478418 python3.9[49955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 06:50:49 np0005478418 systemd[1]: Reloading.
Oct  9 06:50:49 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:50:49 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:50:49 np0005478418 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 06:50:49 np0005478418 chown[49997]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 06:50:49 np0005478418 ovs-ctl[50002]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  9 06:50:49 np0005478418 ovs-ctl[50002]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-ctl[50002]: Starting ovsdb-server [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-vsctl[50051]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 06:50:50 np0005478418 ovs-vsctl[50067]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d7fc944c-987d-4684-8e2b-75d871ca0238\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 06:50:50 np0005478418 ovs-ctl[50002]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-vsctl[50077]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 06:50:50 np0005478418 ovs-ctl[50002]: Enabling remote OVSDB managers [  OK  ]
Oct  9 06:50:50 np0005478418 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 06:50:50 np0005478418 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 06:50:50 np0005478418 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 06:50:50 np0005478418 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 06:50:50 np0005478418 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 06:50:50 np0005478418 ovs-ctl[50122]: Inserting openvswitch module [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-ctl[50091]: Starting ovs-vswitchd [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-ctl[50091]: Enabling remote OVSDB managers [  OK  ]
Oct  9 06:50:50 np0005478418 ovs-vsctl[50140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 06:50:50 np0005478418 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 06:50:50 np0005478418 systemd[1]: Starting Open vSwitch...
Oct  9 06:50:50 np0005478418 systemd[1]: Finished Open vSwitch.
Oct  9 06:50:52 np0005478418 python3.9[50291]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:50:53 np0005478418 python3.9[50443]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  9 06:50:54 np0005478418 kernel: SELinux:  Converting 2739 SID table entries...
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability open_perms=1
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability always_check_network=0
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 06:50:54 np0005478418 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 06:50:55 np0005478418 python3.9[50598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:50:56 np0005478418 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  9 06:50:56 np0005478418 python3.9[50756]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:50:58 np0005478418 python3.9[50909]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:50:59 np0005478418 python3.9[51196]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 06:51:00 np0005478418 python3.9[51346]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:51:01 np0005478418 python3.9[51500]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:51:03 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:51:03 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:51:03 np0005478418 systemd[1]: Reloading.
Oct  9 06:51:03 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:51:03 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:51:03 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:51:03 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:51:03 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:51:03 np0005478418 systemd[1]: run-r16dd054d30c94da6a9415efc38b24289.service: Deactivated successfully.
Oct  9 06:51:04 np0005478418 python3.9[51816]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:51:04 np0005478418 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 06:51:04 np0005478418 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 06:51:04 np0005478418 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 06:51:04 np0005478418 systemd[1]: Stopping Network Manager...
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6495] caught SIGTERM, shutting down normally.
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6507] dhcp4 (eth0): canceled DHCP transaction
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6507] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6507] dhcp4 (eth0): state changed no lease
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6509] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 06:51:04 np0005478418 NetworkManager[3947]: <info>  [1760007064.6557] exiting (success)
Oct  9 06:51:04 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:51:04 np0005478418 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 06:51:04 np0005478418 systemd[1]: Stopped Network Manager.
Oct  9 06:51:04 np0005478418 systemd[1]: NetworkManager.service: Consumed 10.544s CPU time, 4.1M memory peak, read 0B from disk, written 31.0K to disk.
Oct  9 06:51:04 np0005478418 systemd[1]: Starting Network Manager...
Oct  9 06:51:04 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7070] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:270bcca3-191a-457e-9edf-c7e43152a098)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7072] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7122] manager[0x55cd6ae61090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 06:51:04 np0005478418 systemd[1]: Starting Hostname Service...
Oct  9 06:51:04 np0005478418 systemd[1]: Started Hostname Service.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7871] hostname: hostname: using hostnamed
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7872] hostname: static hostname changed from (none) to "compute-0"
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7875] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7879] manager[0x55cd6ae61090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7879] manager[0x55cd6ae61090]: rfkill: WWAN hardware radio set enabled
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7897] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7904] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7904] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7904] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7905] manager: Networking is enabled by state file
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7906] settings: Loaded settings plugin: keyfile (internal)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7908] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7931] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7938] dhcp: init: Using DHCP client 'internal'
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7940] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7943] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7946] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7952] device (lo): Activation: starting connection 'lo' (e9a9d37c-53bf-4ae6-939b-c44f03c26d8d)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7956] device (eth0): carrier: link connected
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7959] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7962] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7963] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7967] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7973] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7977] device (eth1): carrier: link connected
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7980] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7983] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71) (indicated)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7984] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7987] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.7992] device (eth1): Activation: starting connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 06:51:04 np0005478418 systemd[1]: Started Network Manager.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8003] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8008] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8010] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8012] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8014] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8016] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8018] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8021] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8031] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8045] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8048] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8057] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8073] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8085] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8087] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8093] device (lo): Activation: successful, device activated.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8101] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8109] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 06:51:04 np0005478418 systemd[1]: Starting Network Manager Wait Online...
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8176] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8182] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8191] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8196] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8200] device (eth1): Activation: successful, device activated.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8213] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8215] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8219] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8225] device (eth0): Activation: successful, device activated.
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8231] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 06:51:04 np0005478418 NetworkManager[51824]: <info>  [1760007064.8235] manager: startup complete
Oct  9 06:51:04 np0005478418 systemd[1]: Finished Network Manager Wait Online.
Oct  9 06:51:05 np0005478418 python3.9[52042]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:51:10 np0005478418 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 06:51:10 np0005478418 systemd[1]: Starting man-db-cache-update.service...
Oct  9 06:51:10 np0005478418 systemd[1]: Reloading.
Oct  9 06:51:10 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:51:10 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:51:10 np0005478418 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 06:51:10 np0005478418 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 06:51:10 np0005478418 systemd[1]: Finished man-db-cache-update.service.
Oct  9 06:51:10 np0005478418 systemd[1]: run-re002454e169b4e1b8d0c448a63679731.service: Deactivated successfully.
Oct  9 06:51:13 np0005478418 python3.9[52506]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:51:14 np0005478418 python3.9[52658]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:14 np0005478418 python3.9[52812]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:14 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:51:15 np0005478418 python3.9[52964]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:16 np0005478418 python3.9[53116]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:16 np0005478418 python3.9[53268]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:17 np0005478418 python3.9[53420]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:51:18 np0005478418 python3.9[53543]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007077.0915582-647-53048151388937/.source _original_basename=.5fru45pr follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:18 np0005478418 python3.9[53695]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:19 np0005478418 python3.9[53847]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  9 06:51:20 np0005478418 python3.9[53999]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:22 np0005478418 python3.9[54426]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  9 06:51:23 np0005478418 ansible-async_wrapper.py[54601]: Invoked with j468905251958 300 /home/zuul/.ansible/tmp/ansible-tmp-1760007083.0187948-845-199272377610771/AnsiballZ_edpm_os_net_config.py _
Oct  9 06:51:23 np0005478418 ansible-async_wrapper.py[54604]: Starting module and watcher
Oct  9 06:51:23 np0005478418 ansible-async_wrapper.py[54604]: Start watching 54605 (300)
Oct  9 06:51:23 np0005478418 ansible-async_wrapper.py[54605]: Start module (54605)
Oct  9 06:51:23 np0005478418 ansible-async_wrapper.py[54601]: Return async_wrapper task started.
Oct  9 06:51:24 np0005478418 python3.9[54606]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  9 06:51:24 np0005478418 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 06:51:24 np0005478418 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 06:51:24 np0005478418 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 06:51:24 np0005478418 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 06:51:24 np0005478418 kernel: cfg80211: failed to load regulatory.db
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.8641] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.8657] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9145] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9147] audit: op="connection-add" uuid="bcc33efc-3b3b-435c-aca9-5aaebd912cbc" name="br-ex-br" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9161] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9162] audit: op="connection-add" uuid="1827b2a2-a598-410c-876f-6a34fb846274" name="br-ex-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9173] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9175] audit: op="connection-add" uuid="b62c0c8b-048b-4194-8c6b-12421f5f8225" name="eth1-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9186] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9188] audit: op="connection-add" uuid="67dfd2b5-baec-45dc-b86e-4fdc073630f2" name="vlan20-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9199] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9201] audit: op="connection-add" uuid="7306f314-0e92-4506-b51c-f774deaa7421" name="vlan21-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9212] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9214] audit: op="connection-add" uuid="a393cd30-d2f6-426d-abbc-fb10a3e55456" name="vlan22-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9226] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9228] audit: op="connection-add" uuid="35e3f4f0-1d07-4ee8-ad64-4b26247f6f22" name="vlan23-port" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9246] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9260] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9262] audit: op="connection-add" uuid="c1e9250a-b28c-492c-8b7c-9cebcf7f3092" name="br-ex-if" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9293] audit: op="connection-update" uuid="9db59092-af47-5628-a1ce-922d34723c71" name="ci-private-network" args="connection.master,connection.slave-type,connection.controller,connection.port-type,connection.timestamp,ovs-external-ids.data,ovs-interface.type,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv4.addresses,ipv4.dns,ipv4.routes,ipv4.method,ipv4.never-default,ipv4.routing-rules" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9307] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9309] audit: op="connection-add" uuid="3e649017-d999-4000-a1a8-779dad9db729" name="vlan20-if" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9325] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9327] audit: op="connection-add" uuid="e6ab45b5-ed77-42ca-af9e-b8d856cd2793" name="vlan21-if" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9341] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9343] audit: op="connection-add" uuid="3aadbb0c-9529-4d30-8cb0-3397cce8a89a" name="vlan22-if" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9357] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9359] audit: op="connection-add" uuid="3a96d9fc-e83a-47d6-97a1-cf40c6ea5040" name="vlan23-if" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9370] audit: op="connection-delete" uuid="d9ec873a-c659-38cd-905a-e5b4323f1c1d" name="Wired connection 1" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9383] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9394] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9399] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (bcc33efc-3b3b-435c-aca9-5aaebd912cbc)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9400] audit: op="connection-activate" uuid="bcc33efc-3b3b-435c-aca9-5aaebd912cbc" name="br-ex-br" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9404] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9412] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9417] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1827b2a2-a598-410c-876f-6a34fb846274)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9419] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9425] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9431] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b62c0c8b-048b-4194-8c6b-12421f5f8225)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9434] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9441] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9445] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (67dfd2b5-baec-45dc-b86e-4fdc073630f2)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9448] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9454] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9458] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7306f314-0e92-4506-b51c-f774deaa7421)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9460] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9466] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9470] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a393cd30-d2f6-426d-abbc-fb10a3e55456)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9472] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9479] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9483] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (35e3f4f0-1d07-4ee8-ad64-4b26247f6f22)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9484] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9486] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9488] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9494] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9498] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9502] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c1e9250a-b28c-492c-8b7c-9cebcf7f3092)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9503] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9506] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9508] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9509] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9510] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9520] device (eth1): disconnecting for new activation request.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9521] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9524] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9526] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9528] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9531] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9534] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9538] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3e649017-d999-4000-a1a8-779dad9db729)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9539] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9543] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9545] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9547] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9549] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9553] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9556] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e6ab45b5-ed77-42ca-af9e-b8d856cd2793)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9557] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9559] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9561] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9562] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9565] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9569] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9573] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3aadbb0c-9529-4d30-8cb0-3397cce8a89a)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9574] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9577] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9578] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9580] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9582] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9585] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9589] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (3a96d9fc-e83a-47d6-97a1-cf40c6ea5040)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9590] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9592] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9594] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9595] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9597] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9607] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9609] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9612] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9614] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9620] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9624] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9629] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9632] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9634] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9639] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 kernel: ovs-system: entered promiscuous mode
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9643] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9646] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9648] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9652] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9655] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9658] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9660] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9665] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 systemd-udevd[54612]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:51:25 np0005478418 kernel: Timeout policy base is empty
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9668] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9671] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9673] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9677] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9681] dhcp4 (eth0): canceled DHCP transaction
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9681] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9681] dhcp4 (eth0): state changed no lease
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9683] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9697] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9699] audit: op="device-reapply" interface="eth1" ifindex=3 pid=54607 uid=0 result="fail" reason="Device is not activated"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9743] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9748] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9751] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9755] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9802] device (eth1): disconnecting for new activation request.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9803] audit: op="connection-activate" uuid="9db59092-af47-5628-a1ce-922d34723c71" name="ci-private-network" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9814] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9819] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9887] device (eth1): Activation: starting connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9893] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9894] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9906] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9910] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9916] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9920] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9924] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9928] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9931] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9933] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9934] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9935] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9936] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9937] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54607 uid=0 result="success"
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9938] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9943] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9947] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9949] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9953] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9955] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9959] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9962] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9965] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9968] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9971] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9975] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 06:51:25 np0005478418 NetworkManager[51824]: <info>  [1760007085.9978] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0002] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0003] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0010] device (eth1): Activation: successful, device activated.
Oct  9 06:51:26 np0005478418 kernel: br-ex: entered promiscuous mode
Oct  9 06:51:26 np0005478418 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 06:51:26 np0005478418 kernel: vlan22: entered promiscuous mode
Oct  9 06:51:26 np0005478418 systemd-udevd[54611]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0166] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0176] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0196] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0197] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0202] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 06:51:26 np0005478418 kernel: vlan21: entered promiscuous mode
Oct  9 06:51:26 np0005478418 systemd-udevd[54613]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:51:26 np0005478418 kernel: vlan23: entered promiscuous mode
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0279] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0290] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0305] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0307] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0312] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 06:51:26 np0005478418 kernel: vlan20: entered promiscuous mode
Oct  9 06:51:26 np0005478418 systemd-udevd[54721]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0351] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0363] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0380] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0381] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0387] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0423] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0431] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0444] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0461] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0468] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0472] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0481] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0489] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0491] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 06:51:26 np0005478418 NetworkManager[51824]: <info>  [1760007086.0498] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.1686] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.2904] checkpoint[0x55cd6ae36950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.2906] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.5454] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.5472] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.7602] audit: op="networking-control" arg="global-dns-configuration" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.7629] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.7656] audit: op="networking-control" arg="global-dns-configuration" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 python3.9[54967]: ansible-ansible.legacy.async_status Invoked with jid=j468905251958.54601 mode=status _async_dir=/root/.ansible_async
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.7705] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.9086] checkpoint[0x55cd6ae36a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  9 06:51:27 np0005478418 NetworkManager[51824]: <info>  [1760007087.9089] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54607 uid=0 result="success"
Oct  9 06:51:27 np0005478418 ansible-async_wrapper.py[54605]: Module complete (54605)
Oct  9 06:51:28 np0005478418 ansible-async_wrapper.py[54604]: Done in kid B.
Oct  9 06:51:31 np0005478418 python3.9[55071]: ansible-ansible.legacy.async_status Invoked with jid=j468905251958.54601 mode=status _async_dir=/root/.ansible_async
Oct  9 06:51:31 np0005478418 python3.9[55171]: ansible-ansible.legacy.async_status Invoked with jid=j468905251958.54601 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 06:51:32 np0005478418 python3.9[55323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:51:33 np0005478418 python3.9[55446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007092.199424-926-210700387198815/.source.returncode _original_basename=.y85sklj2 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:33 np0005478418 python3.9[55598]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:51:34 np0005478418 python3.9[55721]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007093.4243233-974-110988246559149/.source.cfg _original_basename=.jz96p54y follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:34 np0005478418 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 06:51:35 np0005478418 python3.9[55876]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:51:35 np0005478418 systemd[1]: Reloading Network Manager...
Oct  9 06:51:35 np0005478418 NetworkManager[51824]: <info>  [1760007095.2584] audit: op="reload" arg="0" pid=55880 uid=0 result="success"
Oct  9 06:51:35 np0005478418 NetworkManager[51824]: <info>  [1760007095.2596] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  9 06:51:35 np0005478418 systemd[1]: Reloaded Network Manager.
Oct  9 06:51:35 np0005478418 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 06:51:35 np0005478418 systemd[1]: session-10.scope: Consumed 48.342s CPU time.
Oct  9 06:51:35 np0005478418 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Oct  9 06:51:35 np0005478418 systemd-logind[800]: Removed session 10.
Oct  9 06:51:40 np0005478418 systemd-logind[800]: New session 11 of user zuul.
Oct  9 06:51:40 np0005478418 systemd[1]: Started Session 11 of User zuul.
Oct  9 06:51:42 np0005478418 python3.9[56064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:51:43 np0005478418 python3.9[56218]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:51:44 np0005478418 python3.9[56412]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:51:44 np0005478418 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 06:51:44 np0005478418 systemd[1]: session-11.scope: Consumed 2.112s CPU time.
Oct  9 06:51:44 np0005478418 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Oct  9 06:51:44 np0005478418 systemd-logind[800]: Removed session 11.
Oct  9 06:51:45 np0005478418 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 06:51:50 np0005478418 systemd-logind[800]: New session 12 of user zuul.
Oct  9 06:51:50 np0005478418 systemd[1]: Started Session 12 of User zuul.
Oct  9 06:51:51 np0005478418 python3.9[56594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:51:52 np0005478418 python3.9[56748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:51:53 np0005478418 python3.9[56904]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:51:54 np0005478418 python3.9[56989]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:51:56 np0005478418 python3.9[57142]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:51:57 np0005478418 python3.9[57338]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:51:58 np0005478418 python3.9[57490]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:51:58 np0005478418 systemd[1]: var-lib-containers-storage-overlay-compat657651618-merged.mount: Deactivated successfully.
Oct  9 06:51:58 np0005478418 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck696092930-merged.mount: Deactivated successfully.
Oct  9 06:51:58 np0005478418 podman[57491]: 2025-10-09 10:51:58.584308043 +0000 UTC m=+0.063150515 system refresh
Oct  9 06:51:59 np0005478418 python3.9[57653]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:51:59 np0005478418 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 06:52:00 np0005478418 python3.9[57776]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007118.9058702-197-18302998590215/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ab9cd18adf32a35a215e709609e04e45b85d587f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:00 np0005478418 python3.9[57928]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:01 np0005478418 python3.9[58051]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760007120.394524-242-228296548466113/.source.conf follow=False _original_basename=registries.conf.j2 checksum=9a43130349070f74a95160e8548d2fb35bfd6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:02 np0005478418 python3.9[58203]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:02 np0005478418 python3.9[58355]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:03 np0005478418 python3.9[58507]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:04 np0005478418 python3.9[58659]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:05 np0005478418 python3.9[58811]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:52:07 np0005478418 python3.9[58964]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:52:08 np0005478418 python3.9[59118]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:52:08 np0005478418 python3.9[59270]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:52:09 np0005478418 python3.9[59422]: ansible-service_facts Invoked
Oct  9 06:52:09 np0005478418 network[59439]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 06:52:09 np0005478418 network[59440]: 'network-scripts' will be removed from distribution in near future.
Oct  9 06:52:09 np0005478418 network[59441]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 06:52:15 np0005478418 python3.9[59895]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 06:52:18 np0005478418 python3.9[60048]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 06:52:19 np0005478418 python3.9[60200]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:20 np0005478418 python3.9[60325]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007139.181299-638-181673022543051/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:20 np0005478418 python3.9[60479]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:21 np0005478418 python3.9[60604]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007140.5116532-683-225268696340306/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:23 np0005478418 python3.9[60758]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:24 np0005478418 python3.9[60912]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:52:25 np0005478418 python3.9[60996]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:52:27 np0005478418 python3.9[61150]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:52:27 np0005478418 python3.9[61234]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:52:27 np0005478418 chronyd[799]: chronyd exiting
Oct  9 06:52:27 np0005478418 systemd[1]: Stopping NTP client/server...
Oct  9 06:52:27 np0005478418 systemd[1]: chronyd.service: Deactivated successfully.
Oct  9 06:52:27 np0005478418 systemd[1]: Stopped NTP client/server.
Oct  9 06:52:28 np0005478418 systemd[1]: Starting NTP client/server...
Oct  9 06:52:28 np0005478418 chronyd[61243]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 06:52:28 np0005478418 chronyd[61243]: Frequency -32.386 +/- 0.117 ppm read from /var/lib/chrony/drift
Oct  9 06:52:28 np0005478418 chronyd[61243]: Loaded seccomp filter (level 2)
Oct  9 06:52:28 np0005478418 systemd[1]: Started NTP client/server.
Oct  9 06:52:28 np0005478418 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 06:52:28 np0005478418 systemd[1]: session-12.scope: Consumed 23.759s CPU time.
Oct  9 06:52:28 np0005478418 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Oct  9 06:52:28 np0005478418 systemd-logind[800]: Removed session 12.
Oct  9 06:52:34 np0005478418 systemd-logind[800]: New session 13 of user zuul.
Oct  9 06:52:34 np0005478418 systemd[1]: Started Session 13 of User zuul.
Oct  9 06:52:35 np0005478418 python3.9[61424]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:36 np0005478418 python3.9[61576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:37 np0005478418 python3.9[61699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007155.6475055-62-67481272239336/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:37 np0005478418 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 06:52:37 np0005478418 systemd[1]: session-13.scope: Consumed 1.570s CPU time.
Oct  9 06:52:37 np0005478418 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Oct  9 06:52:37 np0005478418 systemd-logind[800]: Removed session 13.
Oct  9 06:52:42 np0005478418 systemd-logind[800]: New session 14 of user zuul.
Oct  9 06:52:42 np0005478418 systemd[1]: Started Session 14 of User zuul.
Oct  9 06:52:43 np0005478418 python3.9[61877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:52:44 np0005478418 python3.9[62033]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:45 np0005478418 python3.9[62208]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:46 np0005478418 python3.9[62331]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760007164.9758096-83-86055951090809/.source.json _original_basename=.wizdvfip follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:47 np0005478418 python3.9[62483]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:47 np0005478418 python3.9[62606]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007166.8625689-152-151550780388760/.source _original_basename=.p0e4fqfg follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:48 np0005478418 python3.9[62758]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:49 np0005478418 python3.9[62910]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:49 np0005478418 python3.9[63033]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760007168.8104775-224-103258339969024/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:50 np0005478418 python3.9[63185]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:51 np0005478418 python3.9[63308]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760007170.0517633-224-97707983141611/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:52:51 np0005478418 python3.9[63460]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:52 np0005478418 python3.9[63612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:52 np0005478418 python3.9[63735]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007171.9155471-335-52523936686522/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:53 np0005478418 python3.9[63887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:54 np0005478418 python3.9[64010]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007173.1167352-380-58018149915683/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:55 np0005478418 python3.9[64162]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:52:55 np0005478418 systemd[1]: Reloading.
Oct  9 06:52:55 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:52:55 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:52:55 np0005478418 systemd[1]: Reloading.
Oct  9 06:52:55 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:52:55 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:52:55 np0005478418 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 06:52:55 np0005478418 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 06:52:56 np0005478418 python3.9[64389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:56 np0005478418 python3.9[64512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007175.900627-449-71589480820691/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:57 np0005478418 python3.9[64664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:52:58 np0005478418 python3.9[64787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007177.0677776-494-231136986709547/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:52:58 np0005478418 python3.9[64939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:52:58 np0005478418 systemd[1]: Reloading.
Oct  9 06:52:59 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:52:59 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:52:59 np0005478418 systemd[1]: Reloading.
Oct  9 06:52:59 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:52:59 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:52:59 np0005478418 systemd[1]: Starting Create netns directory...
Oct  9 06:52:59 np0005478418 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 06:52:59 np0005478418 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 06:52:59 np0005478418 systemd[1]: Finished Create netns directory.
Oct  9 06:53:00 np0005478418 python3.9[65165]: ansible-ansible.builtin.service_facts Invoked
Oct  9 06:53:00 np0005478418 network[65182]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 06:53:00 np0005478418 network[65183]: 'network-scripts' will be removed from distribution in near future.
Oct  9 06:53:00 np0005478418 network[65184]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 06:53:04 np0005478418 python3.9[65448]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:53:04 np0005478418 systemd[1]: Reloading.
Oct  9 06:53:04 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:53:04 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:53:04 np0005478418 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  9 06:53:05 np0005478418 iptables.init[65488]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  9 06:53:05 np0005478418 iptables.init[65488]: iptables: Flushing firewall rules: [  OK  ]
Oct  9 06:53:05 np0005478418 systemd[1]: iptables.service: Deactivated successfully.
Oct  9 06:53:05 np0005478418 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  9 06:53:05 np0005478418 python3.9[65684]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:53:06 np0005478418 python3.9[65838]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:53:06 np0005478418 systemd[1]: Reloading.
Oct  9 06:53:07 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:53:07 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:53:07 np0005478418 systemd[1]: Starting Netfilter Tables...
Oct  9 06:53:07 np0005478418 systemd[1]: Finished Netfilter Tables.
Oct  9 06:53:08 np0005478418 python3.9[66029]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:53:09 np0005478418 python3.9[66182]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:09 np0005478418 python3.9[66307]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007188.568051-701-185112904499936/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:10 np0005478418 python3.9[66458]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:53:35 np0005478418 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 06:53:35 np0005478418 systemd[1]: session-14.scope: Consumed 18.341s CPU time.
Oct  9 06:53:35 np0005478418 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Oct  9 06:53:35 np0005478418 systemd-logind[800]: Removed session 14.
Oct  9 06:53:47 np0005478418 systemd-logind[800]: New session 15 of user zuul.
Oct  9 06:53:47 np0005478418 systemd[1]: Started Session 15 of User zuul.
Oct  9 06:53:48 np0005478418 python3.9[66651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:53:50 np0005478418 python3.9[66807]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:50 np0005478418 python3.9[66982]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:51 np0005478418 python3.9[67060]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=._uhbql9f recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:52 np0005478418 python3.9[67212]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:52 np0005478418 python3.9[67290]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.3jielvq0 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:53 np0005478418 python3.9[67442]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:53:53 np0005478418 python3.9[67594]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:54 np0005478418 python3.9[67672]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:53:54 np0005478418 python3.9[67824]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:55 np0005478418 python3.9[67902]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 06:53:56 np0005478418 python3.9[68054]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:56 np0005478418 python3.9[68206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:57 np0005478418 python3.9[68284]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:57 np0005478418 python3.9[68436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:53:58 np0005478418 python3.9[68514]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:53:59 np0005478418 python3.9[68666]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:53:59 np0005478418 systemd[1]: Reloading.
Oct  9 06:53:59 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:53:59 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:54:00 np0005478418 python3.9[68855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:00 np0005478418 python3.9[68933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:01 np0005478418 python3.9[69085]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:01 np0005478418 python3.9[69163]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:02 np0005478418 python3.9[69315]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 06:54:02 np0005478418 systemd[1]: Reloading.
Oct  9 06:54:02 np0005478418 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 06:54:02 np0005478418 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 06:54:03 np0005478418 systemd[1]: Starting Create netns directory...
Oct  9 06:54:03 np0005478418 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 06:54:03 np0005478418 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 06:54:03 np0005478418 systemd[1]: Finished Create netns directory.
Oct  9 06:54:03 np0005478418 python3.9[69506]: ansible-ansible.builtin.service_facts Invoked
Oct  9 06:54:04 np0005478418 network[69523]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 06:54:04 np0005478418 network[69524]: 'network-scripts' will be removed from distribution in near future.
Oct  9 06:54:04 np0005478418 network[69525]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 06:54:09 np0005478418 python3.9[69788]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:09 np0005478418 python3.9[69866]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:10 np0005478418 python3.9[70018]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:10 np0005478418 python3.9[70170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:11 np0005478418 python3.9[70293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007250.4398134-608-169506512056672/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:12 np0005478418 python3.9[70445]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 06:54:12 np0005478418 systemd[1]: Starting Time & Date Service...
Oct  9 06:54:12 np0005478418 systemd[1]: Started Time & Date Service.
Oct  9 06:54:13 np0005478418 python3.9[70601]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:14 np0005478418 python3.9[70753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:14 np0005478418 python3.9[70876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007253.8932502-713-242748230881187/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:15 np0005478418 python3.9[71028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:16 np0005478418 python3.9[71151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760007255.1035342-758-32657484643399/.source.yaml _original_basename=.3djz2ama follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:16 np0005478418 python3.9[71303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:17 np0005478418 python3.9[71426]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007256.4124005-803-49140442215934/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:18 np0005478418 python3.9[71578]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:18 np0005478418 python3.9[71731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:19 np0005478418 python3[71884]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 06:54:20 np0005478418 python3.9[72036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:21 np0005478418 python3.9[72159]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007260.0801053-920-197393636315441/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:21 np0005478418 python3.9[72311]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:22 np0005478418 python3.9[72434]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007261.3738801-965-50754872140328/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:23 np0005478418 python3.9[72586]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:23 np0005478418 python3.9[72709]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007262.682737-1010-114408316729534/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:24 np0005478418 python3.9[72861]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:25 np0005478418 python3.9[72984]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007264.035505-1055-73560138996587/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:25 np0005478418 python3.9[73136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 06:54:26 np0005478418 python3.9[73259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760007265.3355982-1100-81300787403417/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:27 np0005478418 python3.9[73411]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:27 np0005478418 python3.9[73563]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:28 np0005478418 python3.9[73722]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:29 np0005478418 python3.9[73875]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:30 np0005478418 python3.9[74027]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:30 np0005478418 python3.9[74179]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 06:54:31 np0005478418 python3.9[74332]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 06:54:32 np0005478418 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 06:54:32 np0005478418 systemd[1]: session-15.scope: Consumed 29.154s CPU time.
Oct  9 06:54:32 np0005478418 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Oct  9 06:54:32 np0005478418 systemd-logind[800]: Removed session 15.
Oct  9 06:54:37 np0005478418 systemd-logind[800]: New session 16 of user zuul.
Oct  9 06:54:37 np0005478418 systemd[1]: Started Session 16 of User zuul.
Oct  9 06:54:37 np0005478418 chronyd[61243]: Selected source 162.159.200.123 (pool.ntp.org)
Oct  9 06:54:38 np0005478418 python3.9[74513]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 06:54:39 np0005478418 python3.9[74665]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:54:40 np0005478418 python3.9[74817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:54:41 np0005478418 python3.9[74969]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxKLVxHyH08B9uu6Rm1nQ0Vpvbq9Sr2uPzp0x1kFoVshseJGBR1aQ8aXMDjLUpOK/h/2Lzsk9dMNnft3J/lrIECeONO3LW+OpIpzsgd2H7/G7RHAORbsyVpdc/upXnCR0syeObXLeDHO0UgBZaN0tSEDCFp5Py6M2hl140Ax7KaCi52KEqwPixt+JD0ci0LfQN5U1cLONQnY9BFbE3cemmOtDvlWowtgnvfiqoS55P9I2QOvukyTd0D+R2Xw/k3pawODVgg7HzozcR35nbsthHe6jsK8t097qjv6eoX4wVKKLJ9Bz9MwaRg2URAx1m/iSXg4cZ6otSi5/z1TT00vGv5XQKCDx91yPwMRPouR5kt9EpIw1h1p2c4fZuTWbTqDUga2wggJwnReh1X5u8sz+DZ0EFNwY1zXHg11clzEy4124BA2IN+tbGMyRVKS2qJuUTJxtNueykLffNdFt4fdaPDYNrG+akKCIK2rexyW+cBOMTYIvqecM8K7NE2XWITmE=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0AFUPu3912eDYJ0yKANjhzJHFhMtgvPu30HHmncVxx#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBZE/n+BKwRmp1BXfGXHBw0+2VvETUrzEj1DaM1+qc+DRoxXeCKNBK9rvEnhbGfrbPaXtJR+AFQC0tPw8GL2upc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP+GLxJ958NjbspQUGR2XUJ6t0fe1Q5Ft+CDPLmvNWjCPaAryAMCk3tg/p8Q+wDH2lQ73JoPdHWhuuHz8hSFXbmI1Mtk73dAu2LVElHjoGO1HOctqtDPD1aJ9MNEw1E2fF0NRRe9WVRv1fDuz68i7o0XOeqvTFS+fgOiP8/D+TWyCJiqqwL8+G8By1A0WrM/A4VNDicX+hS/G37FFVWWW82olKKYEGN2KowVejfwcY1KcvHwlt9G1onwvM78QHO9KGM0cANq6nrPCZWzOGOldPzj+vVf+eQQXJgw+sH1SURjvhdMXkn9PmQjVTumM23Ile2OEs4G4luNpsId3ZfyXgEhBG0Kyb7ZQ58IDpLUQagTRWkp2EKorzohudRa3RbzEMOQ9WfgNJ9CsdDdT3tIO0JNYXvOhrIc8SzcvDDRVS9IMMKo+/aRA3g43sBnMVvnjgr36iV1pr5YvZ+VK7MvV711UuI876eTOblppgU51MPqAAlfcp2AkqHQ208QDJfB0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHNPyml9Ng1dBLerJtVtb/svrCvbgDyHgpwptP/9q9vq#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNBmjb2Kh7q3Kp9/IAbPY4YfPR3DtDpHA6liOxXICBiMVtawbYdR7jyp05RLrpvo9c6N9Y7iQ9LrRMb/nx3KP3k=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAgVuVUZTZ6Zj/kkd8a4HGOpzkvc5d+UeZV2UzGPzMWV+fCqSfSCoPnULP8pUmX1JZqM/7H+gwpvRjWslRL0rBC9g8SKZkCFKpkuVEi3urpYsb2kvSpiWCat6F/lKpL/awAGR+t2zdLMYYvXL6QsJfTirTkZhZnS80VVTZw8uXIJU2F61KbUozFTzo8aktjdsl8AttuNmPGes+Y3Mj9dVvgIS/VeLuaNiMhvRV/dPw0vOUgCBfp7b2pJTm7YbUM3rq70ggcXcLZNcD+wF4p5laTqMZj2tD8yUGuGpKuQ/DXMadBxB+ov/jjlTpSBtFI29Frx/X9Uqfp6I71NIrUuapblkjJNEWk6clq27jIYR+XbvWejEwzO5U6dq+HHDSTplC5BGiuQP0ZeTfW7Gppq+Vtprg9DRiYTxl1PSA8el0dZs0cYbknfLygrjxB3K6wEIaq9MsbzWueCoroUUq8+gltee5aq4PVRuhzi3I3629L8D9gfAyQxLxsYMmTP210L0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOEIsvHAUWloO70GmungdMooalfJwWDaM/GfuNFnBnLN#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKqWQJkV/T3B6GfyL5BYHGOGs1VUtGBTkn1dxcvpX0u9SL37ibofnOYzQSDs/ZczahiSFPBJRMo4IRfm8h5Sl50=#012 create=True mode=0644 path=/tmp/ansible.gjlxmmzp state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:42 np0005478418 python3.9[75121]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gjlxmmzp' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:42 np0005478418 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 06:54:43 np0005478418 python3.9[75277]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gjlxmmzp state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:43 np0005478418 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 06:54:43 np0005478418 systemd[1]: session-16.scope: Consumed 3.271s CPU time.
Oct  9 06:54:43 np0005478418 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Oct  9 06:54:43 np0005478418 systemd-logind[800]: Removed session 16.
Oct  9 06:54:48 np0005478418 systemd-logind[800]: New session 17 of user zuul.
Oct  9 06:54:48 np0005478418 systemd[1]: Started Session 17 of User zuul.
Oct  9 06:54:49 np0005478418 python3.9[75456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:54:51 np0005478418 python3.9[75612]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 06:54:51 np0005478418 python3.9[75766]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 06:54:52 np0005478418 python3.9[75919]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:53 np0005478418 python3.9[76072]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:54:54 np0005478418 python3.9[76226]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:54:55 np0005478418 python3.9[76381]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:54:55 np0005478418 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 06:54:55 np0005478418 systemd[1]: session-17.scope: Consumed 4.170s CPU time.
Oct  9 06:54:55 np0005478418 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Oct  9 06:54:55 np0005478418 systemd-logind[800]: Removed session 17.
Oct  9 06:55:01 np0005478418 systemd-logind[800]: New session 18 of user zuul.
Oct  9 06:55:01 np0005478418 systemd[1]: Started Session 18 of User zuul.
Oct  9 06:55:02 np0005478418 python3.9[76560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:55:03 np0005478418 python3.9[76716]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 06:55:03 np0005478418 python3.9[76800]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 06:55:06 np0005478418 python3.9[76951]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 06:55:07 np0005478418 python3.9[77104]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:55:08 np0005478418 python3.9[77256]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:55:08 np0005478418 python3.9[77408]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Core libraries or services have been updated since boot-up:#012  * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 06:55:09 np0005478418 python3.9[77558]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 06:55:10 np0005478418 python3.9[77708]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:55:10 np0005478418 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 06:55:11 np0005478418 python3.9[77859]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 06:55:12 np0005478418 python3.9[78011]: ansible-ansible.legacy.setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 06:55:12 np0005478418 python3.9[78124]: ansible-ansible.legacy.find Invoked with paths=['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin'] patterns=['shutdown'] file_type=any read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 06:55:13 np0005478418 systemd-logind[800]: Creating /run/nologin, blocking further logins...
Oct  9 06:55:13 np0005478418 systemd-logind[800]: System is rebooting (Reboot initiated by Ansible).
Oct  9 06:55:13 np0005478418 systemd[1]: Stopping Session 1 of User zuul...
Oct  9 06:55:13 np0005478418 systemd[1]: Stopping Session 18 of User zuul...
Oct  9 06:55:13 np0005478418 systemd[1]: Removed slice Slice /system/modprobe.
Oct  9 06:55:13 np0005478418 systemd[1]: Removed slice Slice /system/sshd-keygen.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target Cloud-init target.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target rpc_pipefs.target.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target RPC Port Mapper.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target Timer Units.
Oct  9 06:55:13 np0005478418 systemd[1]: dnf-makecache.timer: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped dnf makecache --timer.
Oct  9 06:55:13 np0005478418 systemd[1]: logrotate.timer: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped Daily rotation of log files.
Oct  9 06:55:13 np0005478418 systemd[1]: systemd-tmpfiles-clean.timer: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Oct  9 06:55:13 np0005478418 systemd[1]: unbound-anchor.timer: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped daily update of the root trust anchor for DNSSEC.
Oct  9 06:55:13 np0005478418 systemd[1]: lvm2-lvmpolld.socket: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Closed LVM2 poll daemon socket.
Oct  9 06:55:13 np0005478418 systemd[1]: systemd-coredump.socket: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Closed Process Core Dump Socket.
Oct  9 06:55:13 np0005478418 systemd[1]: systemd-rfkill.socket: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Closed Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 06:55:13 np0005478418 systemd[1]: Unmounting RPC Pipe File System...
Oct  9 06:55:13 np0005478418 systemd[1]: cloud-final.service: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped Cloud-init: Final Stage.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target Multi-User System.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target Login Prompts.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopping NTP client/server...
Oct  9 06:55:13 np0005478418 systemd[1]: cloud-config.service: Deactivated successfully.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped Cloud-init: Config Stage.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopped target Cloud-config availability.
Oct  9 06:55:13 np0005478418 systemd[1]: Stopping Command Scheduler...
Oct  9 10:55:22 compute-0 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 10:55:22 compute-0 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 10:55:22 compute-0 kernel: BIOS-provided physical RAM map:
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 10:55:22 compute-0 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  9 10:55:22 compute-0 kernel: NX (Execute Disable) protection: active
Oct  9 10:55:22 compute-0 kernel: APIC: Static calls initialized
Oct  9 10:55:22 compute-0 kernel: SMBIOS 2.8 present.
Oct  9 10:55:22 compute-0 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  9 10:55:22 compute-0 kernel: Hypervisor detected: KVM
Oct  9 10:55:22 compute-0 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 10:55:22 compute-0 kernel: kvm-clock: using sched offset of 2494605464226 cycles
Oct  9 10:55:22 compute-0 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 10:55:22 compute-0 kernel: tsc: Detected 2800.000 MHz processor
Oct  9 10:55:22 compute-0 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  9 10:55:22 compute-0 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 10:55:22 compute-0 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 10:55:22 compute-0 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  9 10:55:22 compute-0 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  9 10:55:22 compute-0 kernel: Using GB pages for direct mapping
Oct  9 10:55:22 compute-0 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 10:55:22 compute-0 kernel: ACPI: Early table checksum verification disabled
Oct  9 10:55:22 compute-0 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  9 10:55:22 compute-0 kernel: ACPI: RSDT 0x00000000BFFE16C4 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 10:55:22 compute-0 kernel: ACPI: FACP 0x00000000BFFE1578 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 10:55:22 compute-0 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 10:55:22 compute-0 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  9 10:55:22 compute-0 kernel: ACPI: APIC 0x00000000BFFE15EC 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 10:55:22 compute-0 kernel: ACPI: WAET 0x00000000BFFE169C 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 10:55:22 compute-0 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1578-0xbffe15eb]
Oct  9 10:55:22 compute-0 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1577]
Oct  9 10:55:22 compute-0 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  9 10:55:22 compute-0 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15ec-0xbffe169b]
Oct  9 10:55:22 compute-0 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe169c-0xbffe16c3]
Oct  9 10:55:22 compute-0 kernel: No NUMA configuration found
Oct  9 10:55:22 compute-0 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  9 10:55:22 compute-0 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct  9 10:55:22 compute-0 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  9 10:55:22 compute-0 kernel: Zone ranges:
Oct  9 10:55:22 compute-0 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 10:55:22 compute-0 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 10:55:22 compute-0 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  9 10:55:22 compute-0 kernel:  Device   empty
Oct  9 10:55:22 compute-0 kernel: Movable zone start for each node
Oct  9 10:55:22 compute-0 kernel: Early memory node ranges
Oct  9 10:55:22 compute-0 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 10:55:22 compute-0 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  9 10:55:22 compute-0 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  9 10:55:22 compute-0 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  9 10:55:22 compute-0 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 10:55:22 compute-0 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 10:55:22 compute-0 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 10:55:22 compute-0 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 10:55:22 compute-0 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 10:55:22 compute-0 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 10:55:22 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 10:55:22 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 10:55:22 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 10:55:22 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 10:55:22 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 10:55:22 compute-0 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 10:55:22 compute-0 kernel: TSC deadline timer available
Oct  9 10:55:22 compute-0 kernel: CPU topo: Max. logical packages:   8
Oct  9 10:55:22 compute-0 kernel: CPU topo: Max. logical dies:       8
Oct  9 10:55:22 compute-0 kernel: CPU topo: Max. dies per package:   1
Oct  9 10:55:22 compute-0 kernel: CPU topo: Max. threads per core:   1
Oct  9 10:55:22 compute-0 kernel: CPU topo: Num. cores per package:     1
Oct  9 10:55:22 compute-0 kernel: CPU topo: Num. threads per package:   1
Oct  9 10:55:22 compute-0 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  9 10:55:22 compute-0 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 10:55:22 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 10:55:22 compute-0 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  9 10:55:22 compute-0 kernel: Booting paravirtualized kernel on KVM
Oct  9 10:55:22 compute-0 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 10:55:22 compute-0 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  9 10:55:22 compute-0 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  9 10:55:22 compute-0 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  9 10:55:22 compute-0 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 10:55:22 compute-0 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 10:55:22 compute-0 kernel: random: crng init done
Oct  9 10:55:22 compute-0 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: Fallback order for Node 0: 0 
Oct  9 10:55:22 compute-0 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 10:55:22 compute-0 kernel: Policy zone: Normal
Oct  9 10:55:22 compute-0 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 10:55:22 compute-0 kernel: software IO TLB: area num 8.
Oct  9 10:55:22 compute-0 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  9 10:55:22 compute-0 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 10:55:22 compute-0 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 10:55:22 compute-0 kernel: Dynamic Preempt: voluntary
Oct  9 10:55:22 compute-0 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 10:55:22 compute-0 kernel: rcu: #011RCU event tracing is enabled.
Oct  9 10:55:22 compute-0 kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  9 10:55:22 compute-0 kernel: #011Trampoline variant of Tasks RCU enabled.
Oct  9 10:55:22 compute-0 kernel: #011Rude variant of Tasks RCU enabled.
Oct  9 10:55:22 compute-0 kernel: #011Tracing variant of Tasks RCU enabled.
Oct  9 10:55:22 compute-0 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 10:55:22 compute-0 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  9 10:55:22 compute-0 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 10:55:22 compute-0 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 10:55:22 compute-0 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  9 10:55:22 compute-0 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  9 10:55:22 compute-0 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 10:55:22 compute-0 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 10:55:22 compute-0 kernel: Console: colour VGA+ 80x25
Oct  9 10:55:22 compute-0 kernel: printk: console [ttyS0] enabled
Oct  9 10:55:22 compute-0 kernel: ACPI: Core revision 20230331
Oct  9 10:55:22 compute-0 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 10:55:22 compute-0 kernel: x2apic enabled
Oct  9 10:55:22 compute-0 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 10:55:22 compute-0 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 10:55:22 compute-0 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  9 10:55:22 compute-0 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 10:55:22 compute-0 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 10:55:22 compute-0 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 10:55:22 compute-0 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 10:55:22 compute-0 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 10:55:22 compute-0 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 10:55:22 compute-0 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  9 10:55:22 compute-0 kernel: RETBleed: Mitigation: untrained return thunk
Oct  9 10:55:22 compute-0 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 10:55:22 compute-0 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 10:55:22 compute-0 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 10:55:22 compute-0 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 10:55:22 compute-0 kernel: x86/bugs: return thunk changed
Oct  9 10:55:22 compute-0 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 10:55:22 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 10:55:22 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 10:55:22 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 10:55:22 compute-0 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 10:55:22 compute-0 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  9 10:55:22 compute-0 kernel: Freeing SMP alternatives memory: 40K
Oct  9 10:55:22 compute-0 kernel: pid_max: default: 32768 minimum: 301
Oct  9 10:55:22 compute-0 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 10:55:22 compute-0 kernel: landlock: Up and running.
Oct  9 10:55:22 compute-0 kernel: Yama: becoming mindful.
Oct  9 10:55:22 compute-0 kernel: SELinux:  Initializing.
Oct  9 10:55:22 compute-0 kernel: LSM support for eBPF active
Oct  9 10:55:22 compute-0 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  9 10:55:22 compute-0 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 10:55:22 compute-0 kernel: ... version:                0
Oct  9 10:55:22 compute-0 kernel: ... bit width:              48
Oct  9 10:55:22 compute-0 kernel: ... generic registers:      6
Oct  9 10:55:22 compute-0 kernel: ... value mask:             0000ffffffffffff
Oct  9 10:55:22 compute-0 kernel: ... max period:             00007fffffffffff
Oct  9 10:55:22 compute-0 kernel: ... fixed-purpose events:   0
Oct  9 10:55:22 compute-0 kernel: ... event mask:             000000000000003f
Oct  9 10:55:22 compute-0 kernel: signal: max sigframe size: 1776
Oct  9 10:55:22 compute-0 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 10:55:22 compute-0 kernel: rcu: #011Max phase no-delay instances is 400.
Oct  9 10:55:22 compute-0 kernel: smp: Bringing up secondary CPUs ...
Oct  9 10:55:22 compute-0 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 10:55:22 compute-0 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  9 10:55:22 compute-0 kernel: smp: Brought up 1 node, 8 CPUs
Oct  9 10:55:22 compute-0 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct  9 10:55:22 compute-0 kernel: node 0 deferred pages initialised in 29ms
Oct  9 10:55:22 compute-0 kernel: Memory: 7765632K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616508K reserved, 0K cma-reserved)
Oct  9 10:55:22 compute-0 kernel: devtmpfs: initialized
Oct  9 10:55:22 compute-0 kernel: x86/mm: Memory block size: 128MB
Oct  9 10:55:22 compute-0 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 10:55:22 compute-0 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 10:55:22 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 10:55:22 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 10:55:22 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 10:55:22 compute-0 kernel: audit: initializing netlink subsys (disabled)
Oct  9 10:55:22 compute-0 kernel: audit: type=2000 audit(1760007319.387:1): state=initialized audit_enabled=0 res=1
Oct  9 10:55:22 compute-0 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 10:55:22 compute-0 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 10:55:22 compute-0 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 10:55:22 compute-0 kernel: cpuidle: using governor menu
Oct  9 10:55:22 compute-0 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 10:55:22 compute-0 kernel: PCI: Using configuration type 1 for base access
Oct  9 10:55:22 compute-0 kernel: PCI: Using configuration type 1 for extended access
Oct  9 10:55:22 compute-0 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 10:55:22 compute-0 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 10:55:22 compute-0 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 10:55:22 compute-0 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 10:55:22 compute-0 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 10:55:22 compute-0 kernel: Demotion targets for Node 0: null
Oct  9 10:55:22 compute-0 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 10:55:22 compute-0 kernel: ACPI: Added _OSI(Module Device)
Oct  9 10:55:22 compute-0 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 10:55:22 compute-0 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 10:55:22 compute-0 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 10:55:22 compute-0 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 10:55:22 compute-0 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 10:55:22 compute-0 kernel: ACPI: Interpreter enabled
Oct  9 10:55:22 compute-0 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  9 10:55:22 compute-0 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 10:55:22 compute-0 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 10:55:22 compute-0 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 10:55:22 compute-0 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 10:55:22 compute-0 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [3] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [4] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [5] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [6] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [7] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [8] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [9] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [10] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [11] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [12] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [13] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [14] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [15] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [16] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [17] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [18] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [19] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [20] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [21] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [22] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [23] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [24] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [25] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [26] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [27] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [28] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [29] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [30] registered
Oct  9 10:55:22 compute-0 kernel: acpiphp: Slot [31] registered
Oct  9 10:55:22 compute-0 kernel: PCI host bridge to bus 0000:00
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: BAR 4 [io  0xc180-0xc18f]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.2: BAR 4 [io  0xc140-0xc15f]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:03.0: ROM [mem 0xfea80000-0xfeafffff pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:06.0: BAR 0 [io  0xc160-0xc17f]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:07.0: BAR 0 [io  0xc100-0xc13f]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:07.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 10:55:22 compute-0 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  9 10:55:22 compute-0 kernel: iommu: Default domain type: Translated
Oct  9 10:55:22 compute-0 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 10:55:22 compute-0 kernel: SCSI subsystem initialized
Oct  9 10:55:22 compute-0 kernel: ACPI: bus type USB registered
Oct  9 10:55:22 compute-0 kernel: usbcore: registered new interface driver usbfs
Oct  9 10:55:22 compute-0 kernel: usbcore: registered new interface driver hub
Oct  9 10:55:22 compute-0 kernel: usbcore: registered new device driver usb
Oct  9 10:55:22 compute-0 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 10:55:22 compute-0 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 10:55:22 compute-0 kernel: PTP clock support registered
Oct  9 10:55:22 compute-0 kernel: EDAC MC: Ver: 3.0.0
Oct  9 10:55:22 compute-0 kernel: NetLabel: Initializing
Oct  9 10:55:22 compute-0 kernel: NetLabel:  domain hash size = 128
Oct  9 10:55:22 compute-0 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 10:55:22 compute-0 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 10:55:22 compute-0 kernel: PCI: Using ACPI for IRQ routing
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 10:55:22 compute-0 kernel: vgaarb: loaded
Oct  9 10:55:22 compute-0 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 10:55:22 compute-0 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 10:55:22 compute-0 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 10:55:22 compute-0 kernel: pnp: PnP ACPI init
Oct  9 10:55:22 compute-0 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 10:55:22 compute-0 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_INET protocol family
Oct  9 10:55:22 compute-0 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 10:55:22 compute-0 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_XDP protocol family
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  9 10:55:22 compute-0 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  9 10:55:22 compute-0 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  9 10:55:22 compute-0 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 75582 usecs
Oct  9 10:55:22 compute-0 kernel: PCI: CLS 0 bytes, default 64
Oct  9 10:55:22 compute-0 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 10:55:22 compute-0 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  9 10:55:22 compute-0 kernel: ACPI: bus type thunderbolt registered
Oct  9 10:55:22 compute-0 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 10:55:22 compute-0 kernel: Initialise system trusted keyrings
Oct  9 10:55:22 compute-0 kernel: Key type blacklist registered
Oct  9 10:55:22 compute-0 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 10:55:22 compute-0 kernel: zbud: loaded
Oct  9 10:55:22 compute-0 kernel: integrity: Platform Keyring initialized
Oct  9 10:55:22 compute-0 kernel: integrity: Machine keyring initialized
Oct  9 10:55:22 compute-0 kernel: Freeing initrd memory: 86104K
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_ALG protocol family
Oct  9 10:55:22 compute-0 kernel: xor: automatically using best checksumming function   avx       
Oct  9 10:55:22 compute-0 kernel: Key type asymmetric registered
Oct  9 10:55:22 compute-0 kernel: Asymmetric key parser 'x509' registered
Oct  9 10:55:22 compute-0 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 10:55:22 compute-0 kernel: io scheduler mq-deadline registered
Oct  9 10:55:22 compute-0 kernel: io scheduler kyber registered
Oct  9 10:55:22 compute-0 kernel: io scheduler bfq registered
Oct  9 10:55:22 compute-0 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 10:55:22 compute-0 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 10:55:22 compute-0 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 10:55:22 compute-0 kernel: ACPI: button: Power Button [PWRF]
Oct  9 10:55:22 compute-0 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  9 10:55:22 compute-0 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  9 10:55:22 compute-0 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  9 10:55:22 compute-0 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 10:55:22 compute-0 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 10:55:22 compute-0 kernel: Non-volatile memory driver v1.3
Oct  9 10:55:22 compute-0 kernel: rdac: device handler registered
Oct  9 10:55:22 compute-0 kernel: hp_sw: device handler registered
Oct  9 10:55:22 compute-0 kernel: emc: device handler registered
Oct  9 10:55:22 compute-0 kernel: alua: device handler registered
Oct  9 10:55:22 compute-0 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  9 10:55:22 compute-0 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  9 10:55:22 compute-0 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  9 10:55:22 compute-0 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c140
Oct  9 10:55:22 compute-0 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 10:55:22 compute-0 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 10:55:22 compute-0 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 10:55:22 compute-0 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 10:55:22 compute-0 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  9 10:55:22 compute-0 kernel: hub 1-0:1.0: USB hub found
Oct  9 10:55:22 compute-0 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 10:55:22 compute-0 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 10:55:22 compute-0 kernel: usbserial: USB Serial support registered for generic
Oct  9 10:55:22 compute-0 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 10:55:22 compute-0 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 10:55:22 compute-0 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 10:55:22 compute-0 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 10:55:22 compute-0 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  9 10:55:22 compute-0 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 10:55:22 compute-0 kernel: rtc_cmos 00:04: registered as rtc0
Oct  9 10:55:22 compute-0 kernel: rtc_cmos 00:04: setting system clock to 2025-10-09T10:55:21 UTC (1760007321)
Oct  9 10:55:22 compute-0 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  9 10:55:22 compute-0 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 10:55:22 compute-0 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 10:55:22 compute-0 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 10:55:22 compute-0 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 10:55:22 compute-0 kernel: usbcore: registered new interface driver usbhid
Oct  9 10:55:22 compute-0 kernel: usbhid: USB HID core driver
Oct  9 10:55:22 compute-0 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 10:55:22 compute-0 kernel: Initializing XFRM netlink socket
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_INET6 protocol family
Oct  9 10:55:22 compute-0 kernel: Segment Routing with IPv6
Oct  9 10:55:22 compute-0 kernel: NET: Registered PF_PACKET protocol family
Oct  9 10:55:22 compute-0 kernel: mpls_gso: MPLS GSO support
Oct  9 10:55:22 compute-0 kernel: IPI shorthand broadcast: enabled
Oct  9 10:55:22 compute-0 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 10:55:22 compute-0 kernel: AES CTR mode by8 optimization enabled
Oct  9 10:55:22 compute-0 kernel: sched_clock: Marking stable (1179001782, 144201229)->(1427436340, -104233329)
Oct  9 10:55:22 compute-0 kernel: registered taskstats version 1
Oct  9 10:55:22 compute-0 kernel: Loading compiled-in X.509 certificates
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 10:55:22 compute-0 kernel: Demotion targets for Node 0: null
Oct  9 10:55:22 compute-0 kernel: page_owner is disabled
Oct  9 10:55:22 compute-0 kernel: Key type .fscrypt registered
Oct  9 10:55:22 compute-0 kernel: Key type fscrypt-provisioning registered
Oct  9 10:55:22 compute-0 kernel: Key type big_key registered
Oct  9 10:55:22 compute-0 kernel: Key type encrypted registered
Oct  9 10:55:22 compute-0 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 10:55:22 compute-0 kernel: Loading compiled-in module X.509 certificates
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 10:55:22 compute-0 kernel: ima: Allocated hash algorithm: sha256
Oct  9 10:55:22 compute-0 kernel: ima: No architecture policies found
Oct  9 10:55:22 compute-0 kernel: evm: Initialising EVM extended attributes:
Oct  9 10:55:22 compute-0 kernel: evm: security.selinux
Oct  9 10:55:22 compute-0 kernel: evm: security.SMACK64 (disabled)
Oct  9 10:55:22 compute-0 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 10:55:22 compute-0 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 10:55:22 compute-0 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 10:55:22 compute-0 kernel: evm: security.apparmor (disabled)
Oct  9 10:55:22 compute-0 kernel: evm: security.ima
Oct  9 10:55:22 compute-0 kernel: evm: security.capability
Oct  9 10:55:22 compute-0 kernel: evm: HMAC attrs: 0x1
Oct  9 10:55:22 compute-0 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 10:55:22 compute-0 kernel: Running certificate verification RSA selftest
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 10:55:22 compute-0 kernel: Running certificate verification ECDSA selftest
Oct  9 10:55:22 compute-0 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 10:55:22 compute-0 kernel: clk: Disabling unused clocks
Oct  9 10:55:22 compute-0 kernel: Freeing unused decrypted memory: 2028K
Oct  9 10:55:22 compute-0 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 10:55:22 compute-0 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 10:55:22 compute-0 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 10:55:22 compute-0 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 10:55:22 compute-0 kernel: Run /init as init process
Oct  9 10:55:22 compute-0 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 10:55:22 compute-0 systemd: Detected virtualization kvm.
Oct  9 10:55:22 compute-0 systemd: Detected architecture x86-64.
Oct  9 10:55:22 compute-0 systemd: Running in initrd.
Oct  9 10:55:22 compute-0 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 10:55:22 compute-0 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 10:55:22 compute-0 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 10:55:22 compute-0 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 10:55:22 compute-0 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  9 10:55:22 compute-0 systemd: No hostname configured, using default hostname.
Oct  9 10:55:22 compute-0 systemd: Hostname set to <localhost>.
Oct  9 10:55:22 compute-0 systemd: Initializing machine ID from VM UUID.
Oct  9 10:55:22 compute-0 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 10:55:22 compute-0 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  9 10:55:22 compute-0 systemd: Queued start job for default target Initrd Default Target.
Oct  9 10:55:22 compute-0 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 10:55:22 compute-0 systemd: Reached target Local Encrypted Volumes.
Oct  9 10:55:22 compute-0 systemd: Reached target Initrd /usr File System.
Oct  9 10:55:22 compute-0 systemd: Reached target Local File Systems.
Oct  9 10:55:22 compute-0 systemd: Reached target Path Units.
Oct  9 10:55:22 compute-0 systemd: Reached target Slice Units.
Oct  9 10:55:22 compute-0 systemd: Reached target Swaps.
Oct  9 10:55:22 compute-0 systemd: Reached target Timer Units.
Oct  9 10:55:22 compute-0 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 10:55:22 compute-0 systemd: Listening on Journal Socket (/dev/log).
Oct  9 10:55:22 compute-0 systemd: Listening on Journal Socket.
Oct  9 10:55:22 compute-0 systemd: Listening on udev Control Socket.
Oct  9 10:55:22 compute-0 systemd: Listening on udev Kernel Socket.
Oct  9 10:55:22 compute-0 systemd: Reached target Socket Units.
Oct  9 10:55:22 compute-0 systemd: Starting Create List of Static Device Nodes...
Oct  9 10:55:22 compute-0 systemd: Starting Journal Service...
Oct  9 10:55:22 compute-0 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 10:55:22 compute-0 systemd: Starting Apply Kernel Variables...
Oct  9 10:55:22 compute-0 systemd: Starting Create System Users...
Oct  9 10:55:22 compute-0 systemd: Starting Setup Virtual Console...
Oct  9 10:55:22 compute-0 systemd: Finished Create List of Static Device Nodes.
Oct  9 10:55:22 compute-0 systemd: Finished Apply Kernel Variables.
Oct  9 10:55:22 compute-0 systemd: Finished Create System Users.
Oct  9 10:55:22 compute-0 systemd-journald[306]: Journal started
Oct  9 10:55:22 compute-0 systemd-journald[306]: Runtime Journal (/run/log/journal/8e0946a45a4143b5afdedf78fe78e002) is 8.0M, max 153.5M, 145.5M free.
Oct  9 10:55:22 compute-0 systemd-sysusers[310]: Creating group 'users' with GID 100.
Oct  9 10:55:22 compute-0 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Oct  9 10:55:22 compute-0 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 10:55:22 compute-0 systemd: Started Journal Service.
Oct  9 10:55:22 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 10:55:22 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 10:55:22 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 10:55:22 compute-0 systemd[1]: Finished Setup Virtual Console.
Oct  9 10:55:22 compute-0 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 10:55:22 compute-0 systemd[1]: Starting dracut cmdline hook...
Oct  9 10:55:22 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 10:55:22 compute-0 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 10:55:22 compute-0 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 10:55:22 compute-0 systemd[1]: Finished dracut cmdline hook.
Oct  9 10:55:22 compute-0 systemd[1]: Starting dracut pre-udev hook...
Oct  9 10:55:22 compute-0 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 10:55:22 compute-0 kernel: device-mapper: uevent: version 1.0.3
Oct  9 10:55:22 compute-0 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 10:55:22 compute-0 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 10:55:22 compute-0 kernel: RPC: Registered udp transport module.
Oct  9 10:55:22 compute-0 kernel: RPC: Registered tcp transport module.
Oct  9 10:55:22 compute-0 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 10:55:22 compute-0 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 10:55:22 compute-0 rpc.statd[442]: Version 2.5.4 starting
Oct  9 10:55:22 compute-0 rpc.statd[442]: Initializing NSM state
Oct  9 10:55:22 compute-0 rpc.idmapd[447]: Setting log level to 0
Oct  9 10:55:22 compute-0 systemd[1]: Finished dracut pre-udev hook.
Oct  9 10:55:22 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 10:55:22 compute-0 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 10:55:22 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 10:55:22 compute-0 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 10:55:22 compute-0 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 10:55:22 compute-0 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 10:55:22 compute-0 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 10:55:22 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 10:55:22 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 10:55:22 compute-0 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 10:55:22 compute-0 systemd[1]: Reached target Network.
Oct  9 10:55:22 compute-0 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 10:55:22 compute-0 systemd[1]: Starting dracut initqueue hook...
Oct  9 10:55:22 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 10:55:22 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 10:55:22 compute-0 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 10:55:22 compute-0 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 10:55:22 compute-0 systemd[1]: Reached target System Initialization.
Oct  9 10:55:22 compute-0 systemd[1]: Reached target Basic System.
Oct  9 10:55:22 compute-0 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  9 10:55:22 compute-0 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 10:55:22 compute-0 kernel: vda: vda1
Oct  9 10:55:22 compute-0 kernel: scsi host0: ata_piix
Oct  9 10:55:22 compute-0 kernel: scsi host1: ata_piix
Oct  9 10:55:22 compute-0 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc180 irq 14 lpm-pol 0
Oct  9 10:55:22 compute-0 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc188 irq 15 lpm-pol 0
Oct  9 10:55:23 compute-0 systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:23 compute-0 systemd-udevd[504]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:23 compute-0 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Initrd Root Device.
Oct  9 10:55:23 compute-0 kernel: ata1: found unknown device (class 0)
Oct  9 10:55:23 compute-0 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 10:55:23 compute-0 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 10:55:23 compute-0 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 10:55:23 compute-0 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 10:55:23 compute-0 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 10:55:23 compute-0 systemd[1]: Finished dracut initqueue hook.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Remote File Systems.
Oct  9 10:55:23 compute-0 systemd[1]: Starting dracut pre-mount hook...
Oct  9 10:55:23 compute-0 systemd[1]: Finished dracut pre-mount hook.
Oct  9 10:55:23 compute-0 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 10:55:23 compute-0 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 10:55:23 compute-0 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 10:55:23 compute-0 systemd[1]: Mounting /sysroot...
Oct  9 10:55:23 compute-0 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 10:55:23 compute-0 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 10:55:23 compute-0 kernel: XFS (vda1): Ending clean mount
Oct  9 10:55:23 compute-0 systemd[1]: Mounted /sysroot.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Initrd Root File System.
Oct  9 10:55:23 compute-0 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 10:55:23 compute-0 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 10:55:23 compute-0 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Initrd File Systems.
Oct  9 10:55:23 compute-0 systemd[1]: Reached target Initrd Default Target.
Oct  9 10:55:23 compute-0 systemd[1]: Starting dracut mount hook...
Oct  9 10:55:23 compute-0 systemd[1]: Finished dracut mount hook.
Oct  9 10:55:23 compute-0 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 10:55:24 compute-0 rpc.idmapd[447]: exiting on signal 15
Oct  9 10:55:24 compute-0 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 10:55:24 compute-0 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Network.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Timer Units.
Oct  9 10:55:24 compute-0 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Initrd Default Target.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Basic System.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Initrd Root Device.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Path Units.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Remote File Systems.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Slice Units.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Socket Units.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target System Initialization.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Local File Systems.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Swaps.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut mount hook.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut initqueue hook.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 10:55:24 compute-0 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 10:55:24 compute-0 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Setup Virtual Console.
Oct  9 10:55:24 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 10:55:24 compute-0 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Closed udev Control Socket.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Closed udev Kernel Socket.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 10:55:24 compute-0 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped dracut cmdline hook.
Oct  9 10:55:24 compute-0 systemd[1]: Starting Cleanup udev Database...
Oct  9 10:55:24 compute-0 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 10:55:24 compute-0 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 10:55:24 compute-0 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Stopped Create System Users.
Oct  9 10:55:24 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 10:55:24 compute-0 systemd[1]: Finished Cleanup udev Database.
Oct  9 10:55:24 compute-0 systemd[1]: Reached target Switch Root.
Oct  9 10:55:24 compute-0 systemd[1]: Starting Switch Root...
Oct  9 10:55:24 compute-0 systemd[1]: Switching root.
Oct  9 10:55:24 compute-0 systemd-journald[306]: Journal stopped
Oct  9 10:55:25 compute-0 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 10:55:25 compute-0 kernel: audit: type=1404 audit(1760007324.321:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 10:55:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 10:55:25 compute-0 kernel: audit: type=1403 audit(1760007324.468:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 10:55:25 compute-0 systemd: Successfully loaded SELinux policy in 151.494ms.
Oct  9 10:55:25 compute-0 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.551ms.
Oct  9 10:55:25 compute-0 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 10:55:25 compute-0 systemd: Detected virtualization kvm.
Oct  9 10:55:25 compute-0 systemd: Detected architecture x86-64.
Oct  9 10:55:25 compute-0 systemd: Hostname set to <compute-0>.
Oct  9 10:55:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:55:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:55:25 compute-0 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd: Stopped Switch Root.
Oct  9 10:55:25 compute-0 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 10:55:25 compute-0 systemd: Created slice Slice /system/getty.
Oct  9 10:55:25 compute-0 systemd: Created slice Slice /system/serial-getty.
Oct  9 10:55:25 compute-0 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 10:55:25 compute-0 systemd: Created slice User and Session Slice.
Oct  9 10:55:25 compute-0 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 10:55:25 compute-0 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 10:55:25 compute-0 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 10:55:25 compute-0 systemd: Reached target Local Encrypted Volumes.
Oct  9 10:55:25 compute-0 systemd: Stopped target Switch Root.
Oct  9 10:55:25 compute-0 systemd: Stopped target Initrd File Systems.
Oct  9 10:55:25 compute-0 systemd: Stopped target Initrd Root File System.
Oct  9 10:55:25 compute-0 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 10:55:25 compute-0 systemd: Reached target Path Units.
Oct  9 10:55:25 compute-0 systemd: Reached target rpc_pipefs.target.
Oct  9 10:55:25 compute-0 systemd: Reached target Slice Units.
Oct  9 10:55:25 compute-0 systemd: Reached target Local Verity Protected Volumes.
Oct  9 10:55:25 compute-0 systemd: Listening on Device-mapper event daemon FIFOs.
Oct  9 10:55:25 compute-0 systemd: Listening on LVM2 poll daemon socket.
Oct  9 10:55:25 compute-0 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 10:55:25 compute-0 systemd: Reached target RPC Port Mapper.
Oct  9 10:55:25 compute-0 systemd: Listening on Process Core Dump Socket.
Oct  9 10:55:25 compute-0 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 10:55:25 compute-0 systemd: Listening on udev Control Socket.
Oct  9 10:55:25 compute-0 systemd: Listening on udev Kernel Socket.
Oct  9 10:55:25 compute-0 systemd: Mounting Huge Pages File System...
Oct  9 10:55:25 compute-0 systemd: Mounting /dev/hugepages1G...
Oct  9 10:55:25 compute-0 systemd: Mounting /dev/hugepages2M...
Oct  9 10:55:25 compute-0 systemd: Mounting POSIX Message Queue File System...
Oct  9 10:55:25 compute-0 systemd: Mounting Kernel Debug File System...
Oct  9 10:55:25 compute-0 systemd: Mounting Kernel Trace File System...
Oct  9 10:55:25 compute-0 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 10:55:25 compute-0 systemd: Starting Create List of Static Device Nodes...
Oct  9 10:55:25 compute-0 systemd: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 10:55:25 compute-0 systemd: Starting Load Kernel Module configfs...
Oct  9 10:55:25 compute-0 systemd: Starting Load Kernel Module drm...
Oct  9 10:55:25 compute-0 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 10:55:25 compute-0 systemd: Starting Load Kernel Module fuse...
Oct  9 10:55:25 compute-0 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 10:55:25 compute-0 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd: Stopped File System Check on Root Device.
Oct  9 10:55:25 compute-0 systemd: Stopped Journal Service.
Oct  9 10:55:25 compute-0 systemd: Starting Journal Service...
Oct  9 10:55:25 compute-0 systemd: Starting Load Kernel Modules...
Oct  9 10:55:25 compute-0 kernel: fuse: init (API version 7.37)
Oct  9 10:55:25 compute-0 systemd: Starting Generate network units from Kernel command line...
Oct  9 10:55:25 compute-0 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 10:55:25 compute-0 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 10:55:25 compute-0 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd: Starting Coldplug All udev Devices...
Oct  9 10:55:25 compute-0 systemd-journald[690]: Journal started
Oct  9 10:55:25 compute-0 systemd-journald[690]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  9 10:55:25 compute-0 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 10:55:25 compute-0 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd: Started Journal Service.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted Huge Pages File System.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted /dev/hugepages1G.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted /dev/hugepages2M.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted POSIX Message Queue File System.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted Kernel Debug File System.
Oct  9 10:55:25 compute-0 systemd[1]: Mounted Kernel Trace File System.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Create List of Static Device Nodes.
Oct  9 10:55:25 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 10:55:25 compute-0 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 10:55:25 compute-0 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 10:55:25 compute-0 kernel: ACPI: bus type drm_connector registered
Oct  9 10:55:25 compute-0 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 10:55:25 compute-0 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Module drm.
Oct  9 10:55:25 compute-0 systemd[1]: Mounting FUSE Control File System...
Oct  9 10:55:25 compute-0 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 10:55:25 compute-0 systemd[1]: Mounted FUSE Control File System.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 10:55:25 compute-0 systemd[1]: Activating swap /swap...
Oct  9 10:55:25 compute-0 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 10:55:25 compute-0 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct  9 10:55:25 compute-0 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 10:55:25 compute-0 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 10:55:25 compute-0 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 10:55:25 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 10:55:25 compute-0 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 10:55:25 compute-0 systemd-journald[690]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 8.089ms for 775 entries.
Oct  9 10:55:25 compute-0 systemd-journald[690]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct  9 10:55:25 compute-0 systemd-journald[690]: Received client request to flush runtime journal.
Oct  9 10:55:25 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 10:55:25 compute-0 kernel: Bridge firewalling registered
Oct  9 10:55:25 compute-0 systemd[1]: Activated swap /swap.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 10:55:25 compute-0 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 10:55:25 compute-0 systemd[1]: Reached target Swaps.
Oct  9 10:55:25 compute-0 systemd-modules-load[691]: Inserted module 'br_netfilter'
Oct  9 10:55:25 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 10:55:25 compute-0 systemd-modules-load[691]: Inserted module 'nf_conntrack'
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct  9 10:55:25 compute-0 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 10:55:25 compute-0 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 10:55:25 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 10:55:25 compute-0 systemd[1]: Reached target Local File Systems.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Import network configuration from initramfs...
Oct  9 10:55:25 compute-0 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 10:55:25 compute-0 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 10:55:25 compute-0 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 10:55:25 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 10:55:25 compute-0 bootctl[709]: Couldn't find EFI system partition, skipping.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Import network configuration from initramfs.
Oct  9 10:55:25 compute-0 systemd-udevd[711]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 10:55:25 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 10:55:25 compute-0 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 10:55:25 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 10:55:25 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 10:55:25 compute-0 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 10:55:25 compute-0 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  9 10:55:25 compute-0 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 10:55:25 compute-0 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 10:55:25 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 10:55:25 compute-0 systemd-udevd[731]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Security Auditing Service...
Oct  9 10:55:25 compute-0 systemd[1]: Starting RPC Bind...
Oct  9 10:55:25 compute-0 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct  9 10:55:25 compute-0 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct  9 10:55:25 compute-0 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  9 10:55:25 compute-0 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  9 10:55:25 compute-0 auditd[776]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 10:55:25 compute-0 auditd[776]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 10:55:25 compute-0 kernel: Console: switching to colour dummy device 80x25
Oct  9 10:55:25 compute-0 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 10:55:25 compute-0 kernel: [drm] features: -context_init
Oct  9 10:55:25 compute-0 kernel: [drm] number of scanouts: 1
Oct  9 10:55:25 compute-0 kernel: [drm] number of cap sets: 0
Oct  9 10:55:25 compute-0 systemd[1]: Started RPC Bind.
Oct  9 10:55:25 compute-0 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  9 10:55:25 compute-0 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 10:55:25 compute-0 kernel: Console: switching to colour frame buffer device 128x48
Oct  9 10:55:25 compute-0 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 10:55:25 compute-0 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:25 compute-0 augenrules[781]: /sbin/augenrules: No change
Oct  9 10:55:25 compute-0 augenrules[806]: No rules
Oct  9 10:55:25 compute-0 augenrules[806]: enabled 1
Oct  9 10:55:25 compute-0 augenrules[806]: failure 1
Oct  9 10:55:25 compute-0 augenrules[806]: pid 776
Oct  9 10:55:25 compute-0 augenrules[806]: rate_limit 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_limit 8192
Oct  9 10:55:25 compute-0 augenrules[806]: lost 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog 1
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time 60000
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time_actual 0
Oct  9 10:55:25 compute-0 augenrules[806]: enabled 1
Oct  9 10:55:25 compute-0 augenrules[806]: failure 1
Oct  9 10:55:25 compute-0 augenrules[806]: pid 776
Oct  9 10:55:25 compute-0 augenrules[806]: rate_limit 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_limit 8192
Oct  9 10:55:25 compute-0 augenrules[806]: lost 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time 60000
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time_actual 0
Oct  9 10:55:25 compute-0 augenrules[806]: enabled 1
Oct  9 10:55:25 compute-0 augenrules[806]: failure 1
Oct  9 10:55:25 compute-0 augenrules[806]: pid 776
Oct  9 10:55:25 compute-0 augenrules[806]: rate_limit 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_limit 8192
Oct  9 10:55:25 compute-0 augenrules[806]: lost 0
Oct  9 10:55:25 compute-0 augenrules[806]: backlog 4
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time 60000
Oct  9 10:55:25 compute-0 augenrules[806]: backlog_wait_time_actual 0
Oct  9 10:55:25 compute-0 systemd[1]: Started Security Auditing Service.
Oct  9 10:55:25 compute-0 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 10:55:25 compute-0 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 10:55:25 compute-0 kernel: kvm_amd: TSC scaling supported
Oct  9 10:55:25 compute-0 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 10:55:25 compute-0 kernel: kvm_amd: Nested Paging enabled
Oct  9 10:55:25 compute-0 kernel: kvm_amd: LBR virtualization supported
Oct  9 10:55:26 compute-0 systemd[1]: Reached target System Initialization.
Oct  9 10:55:26 compute-0 systemd[1]: Started dnf makecache --timer.
Oct  9 10:55:26 compute-0 systemd[1]: Started Daily rotation of log files.
Oct  9 10:55:26 compute-0 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct  9 10:55:26 compute-0 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct  9 10:55:26 compute-0 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 10:55:26 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 10:55:26 compute-0 systemd[1]: Reached target Timer Units.
Oct  9 10:55:26 compute-0 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 10:55:26 compute-0 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 10:55:26 compute-0 systemd[1]: Reached target Socket Units.
Oct  9 10:55:26 compute-0 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 10:55:26 compute-0 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 10:55:26 compute-0 systemd[1]: Started D-Bus System Message Bus.
Oct  9 10:55:26 compute-0 systemd[1]: Reached target Basic System.
Oct  9 10:55:26 compute-0 dbus-broker-lau[837]: Ready
Oct  9 10:55:26 compute-0 systemd[1]: Starting NTP client/server...
Oct  9 10:55:26 compute-0 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 10:55:26 compute-0 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 10:55:26 compute-0 systemd[1]: Started irqbalance daemon.
Oct  9 10:55:26 compute-0 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 10:55:26 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 10:55:26 compute-0 systemd[1]: Starting Netfilter Tables...
Oct  9 10:55:26 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 10:55:26 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 10:55:26 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 10:55:26 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct  9 10:55:26 compute-0 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 10:55:26 compute-0 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 10:55:26 compute-0 systemd[1]: Starting Resets System Activity Logs...
Oct  9 10:55:26 compute-0 systemd[1]: Starting User Login Management...
Oct  9 10:55:26 compute-0 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 10:55:26 compute-0 systemd[1]: Finished Resets System Activity Logs.
Oct  9 10:55:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 10:55:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 10:55:26 compute-0 systemd[1]: Finished Create netns directory.
Oct  9 10:55:26 compute-0 systemd-logind[846]: New seat seat0.
Oct  9 10:55:26 compute-0 systemd-logind[846]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 10:55:26 compute-0 systemd-logind[846]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 10:55:26 compute-0 systemd[1]: Started User Login Management.
Oct  9 10:55:26 compute-0 chronyd[853]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 10:55:26 compute-0 chronyd[853]: Frequency -32.386 +/- 0.117 ppm read from /var/lib/chrony/drift
Oct  9 10:55:26 compute-0 chronyd[853]: Loaded seccomp filter (level 2)
Oct  9 10:55:26 compute-0 systemd[1]: Started NTP client/server.
Oct  9 10:55:26 compute-0 systemd[1]: Finished Netfilter Tables.
Oct  9 10:55:26 compute-0 cloud-init[873]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 10:55:26 +0000. Up 6.45 seconds.
Oct  9 10:55:26 compute-0 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 10:55:26 compute-0 systemd[1]: Reached target Preparation for Network.
Oct  9 10:55:26 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 10:55:26 compute-0 chown[875]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 10:55:27 compute-0 ovs-ctl[880]: Starting ovsdb-server [  OK  ]
Oct  9 10:55:27 compute-0 ovs-vsctl[929]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 10:55:27 compute-0 ovs-vsctl[939]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d7fc944c-987d-4684-8e2b-75d871ca0238\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 10:55:27 compute-0 ovs-ctl[880]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 10:55:27 compute-0 ovs-vsctl[944]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 10:55:27 compute-0 ovs-ctl[880]: Enabling remote OVSDB managers [  OK  ]
Oct  9 10:55:27 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 10:55:27 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 10:55:27 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 10:55:27 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 10:55:27 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 10:55:27 compute-0 ovs-ctl[989]: Inserting openvswitch module [  OK  ]
Oct  9 10:55:27 compute-0 kernel: ovs-system: entered promiscuous mode
Oct  9 10:55:27 compute-0 systemd-udevd[725]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:27 compute-0 kernel: Timeout policy base is empty
Oct  9 10:55:27 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 10:55:27 compute-0 kernel: vlan22: entered promiscuous mode
Oct  9 10:55:27 compute-0 kernel: vlan21: entered promiscuous mode
Oct  9 10:55:27 compute-0 systemd-udevd[762]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:55:27 compute-0 kernel: vlan23: entered promiscuous mode
Oct  9 10:55:27 compute-0 kernel: vlan20: entered promiscuous mode
Oct  9 10:55:27 compute-0 ovs-ctl[958]: Starting ovs-vswitchd [  OK  ]
Oct  9 10:55:27 compute-0 ovs-vsctl[1033]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 10:55:27 compute-0 ovs-ctl[958]: Enabling remote OVSDB managers [  OK  ]
Oct  9 10:55:27 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 10:55:27 compute-0 systemd[1]: Starting Open vSwitch...
Oct  9 10:55:27 compute-0 systemd[1]: Finished Open vSwitch.
Oct  9 10:55:27 compute-0 systemd[1]: Starting Network Manager...
Oct  9 10:55:27 compute-0 NetworkManager[1036]: <info>  [1760007327.9077] NetworkManager (version 1.54.1-1.el9) is starting... (boot:9e66b594-4f64-4e7a-8e20-acbe68b77d07)
Oct  9 10:55:27 compute-0 NetworkManager[1036]: <info>  [1760007327.9082] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct  9 10:55:27 compute-0 NetworkManager[1036]: <info>  [1760007327.9219] manager[0x557ac56ef040]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 10:55:27 compute-0 systemd[1]: Starting Hostname Service...
Oct  9 10:55:28 compute-0 systemd[1]: Started Hostname Service.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0067] hostname: hostname: using hostnamed
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0068] hostname: static hostname changed from (none) to "compute-0"
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0076] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0174] manager[0x557ac56ef040]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0174] manager[0x557ac56ef040]: rfkill: WWAN hardware radio set enabled
Oct  9 10:55:28 compute-0 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0272] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0295] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0296] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0296] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0297] manager: Networking is enabled by state file
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0304] settings: Loaded settings plugin: keyfile (internal)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0349] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 10:55:28 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0456] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0484] dhcp: init: Using DHCP client 'internal'
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0487] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0499] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0510] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0517] device (lo): Activation: starting connection 'lo' (40d525bc-a2c2-4fb3-82cc-3606eba57f74)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0529] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0567] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0570] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0585] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0589] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0604] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 10:55:28 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0607] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0621] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0625] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0642] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0649] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0667] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0670] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0675] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0676] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0681] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0683] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0689] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0690] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0696] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0698] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0703] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0705] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0710] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0713] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0718] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0720] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 systemd[1]: Started Network Manager.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0731] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0739] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0741] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0743] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Network.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0745] device (eth0): carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0747] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0748] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0749] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0750] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0752] device (eth1): carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0759] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 kernel: vlan21: left promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0774] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0787] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0790] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0792] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0796] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0800] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0805] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0807] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0809] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0811] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0813] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0815] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0817] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0826] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 10:55:28 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0828] policy: auto-activating connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0828] policy: auto-activating connection 'br-ex-port' (1827b2a2-a598-410c-876f-6a34fb846274)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0829] policy: auto-activating connection 'vlan23-port' (35e3f4f0-1d07-4ee8-ad64-4b26247f6f22)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0830] policy: auto-activating connection 'vlan20-port' (67dfd2b5-baec-45dc-b86e-4fdc073630f2)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0830] policy: auto-activating connection 'vlan21-port' (7306f314-0e92-4506-b51c-f774deaa7421)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0831] policy: auto-activating connection 'vlan22-port' (a393cd30-d2f6-426d-abbc-fb10a3e55456)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0831] policy: auto-activating connection 'eth1-port' (b62c0c8b-048b-4194-8c6b-12421f5f8225)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0832] policy: auto-activating connection 'br-ex-br' (bcc33efc-3b3b-435c-aca9-5aaebd912cbc)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0835] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0838] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0842] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0844] device (eth1): Activation: starting connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0846] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1827b2a2-a598-410c-876f-6a34fb846274)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0848] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (35e3f4f0-1d07-4ee8-ad64-4b26247f6f22)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0850] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (67dfd2b5-baec-45dc-b86e-4fdc073630f2)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0852] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7306f314-0e92-4506-b51c-f774deaa7421)
Oct  9 10:55:28 compute-0 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0854] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a393cd30-d2f6-426d-abbc-fb10a3e55456)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0857] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b62c0c8b-048b-4194-8c6b-12421f5f8225)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0861] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (bcc33efc-3b3b-435c-aca9-5aaebd912cbc)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0862] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 10:55:28 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0907] device (lo): Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0917] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0920] manager: NetworkManager state is now CONNECTING
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0921] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct  9 10:55:28 compute-0 kernel: vlan22: left promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0979] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0982] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0989] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.0996] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1001] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1005] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1010] device (eth1): disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1011] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1013] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1022] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1026] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1037] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1039] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1047] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1048] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: vlan20: left promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1056] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1056] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1069] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1071] device (vlan23)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1079] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1081] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1085] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1089] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1141] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1150] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1154] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1163] device (eth1): disconnecting for new activation request.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1167] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1173] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1179] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1197] device (eth1): Activation: starting connection 'ci-private-network' (9db59092-af47-5628-a1ce-922d34723c71)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1202] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1210] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1827b2a2-a598-410c-876f-6a34fb846274)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1213] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1218] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b62c0c8b-048b-4194-8c6b-12421f5f8225)
Oct  9 10:55:28 compute-0 systemd[1]: Reached target NFS client services.
Oct  9 10:55:28 compute-0 kernel: vlan23: left promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1223] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1229] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (67dfd2b5-baec-45dc-b86e-4fdc073630f2)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1232] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1237] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7306f314-0e92-4506-b51c-f774deaa7421)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1239] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1241] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a393cd30-d2f6-426d-abbc-fb10a3e55456)
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1243] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1259] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (35e3f4f0-1d07-4ee8-ad64-4b26247f6f22)
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Remote File Systems.
Oct  9 10:55:28 compute-0 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1314] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1318] policy: auto-activating connection 'vlan22-if' (3aadbb0c-9529-4d30-8cb0-3397cce8a89a)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1325] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 10:55:28 compute-0 kernel: ovs-system: left promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1357] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1372] policy: auto-activating connection 'vlan20-if' (3e649017-d999-4000-a1a8-779dad9db729)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1381] policy: auto-activating connection 'vlan21-if' (e6ab45b5-ed77-42ca-af9e-b8d856cd2793)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1383] policy: auto-activating connection 'vlan23-if' (3a96d9fc-e83a-47d6-97a1-cf40c6ea5040)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1386] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1400] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1408] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1412] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1416] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1419] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1427] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1430] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1433] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1436] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1443] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1447] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1450] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1454] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1460] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1463] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1466] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1468] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1474] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1477] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1479] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1482] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1489] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1492] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1493] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1502] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1512] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1524] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1530] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1538] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3aadbb0c-9529-4d30-8cb0-3397cce8a89a)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1541] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1548] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1555] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3e649017-d999-4000-a1a8-779dad9db729)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1558] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1565] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1573] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e6ab45b5-ed77-42ca-af9e-b8d856cd2793)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1574] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1581] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1588] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1594] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1602] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1610] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1617] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1625] policy: auto-activating connection 'vlan23-if' (3a96d9fc-e83a-47d6-97a1-cf40c6ea5040)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1628] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1630] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1634] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1642] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1649] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1658] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1662] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1669] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1672] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1677] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: ovs-system: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1683] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1699] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1701] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1706] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1711] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1714] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: No such timeout policy "ovs_test_tp"
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1718] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (3a96d9fc-e83a-47d6-97a1-cf40c6ea5040)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1718] policy: auto-activating connection 'br-ex-if' (c1e9250a-b28c-492c-8b7c-9cebcf7f3092)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1720] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1725] device (eth0): Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1729] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1731] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1732] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1733] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1734] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1735] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1736] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1739] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1742] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1745] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1746] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1751] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c1e9250a-b28c-492c-8b7c-9cebcf7f3092)
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1751] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1754] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1756] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1762] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1765] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1767] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1770] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1773] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1776] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1779] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1782] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1786] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1791] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1795] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1799] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1800] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1805] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 10:55:28 compute-0 kernel: vlan22: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1870] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1878] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 10:55:28 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1889] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1895] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1907] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1911] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1917] device (eth1): Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1927] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1937] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: vlan20: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1978] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1980] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.1987] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 kernel: vlan21: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2047] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2061] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2098] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2101] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2109] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2122] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2133] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: br-ex: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2185] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2187] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 kernel: vlan23: entered promiscuous mode
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2194] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2589] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2592] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2612] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2620] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2631] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2633] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2640] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2646] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2647] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2653] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 10:55:28 compute-0 NetworkManager[1036]: <info>  [1760007328.2658] manager: startup complete
Oct  9 10:55:28 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct  9 10:55:28 compute-0 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 10:55:28 compute-0 systemd[1]: Starting Authorization Manager...
Oct  9 10:55:28 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 10:55:28 compute-0 polkitd[1190]: Started polkitd version 0.117
Oct  9 10:55:28 compute-0 systemd[1]: Started Authorization Manager.
Oct  9 10:55:28 compute-0 cloud-init[1280]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 10:55:28 +0000. Up 8.20 seconds.
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   br-ex    |  True | 192.168.122.100 | 255.255.255.0 | global | fa:16:3e:fa:2e:f6 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |    eth0    |  True |   38.102.83.12  | 255.255.255.0 | global | fa:16:3e:01:5c:d3 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:fa:2e:f6 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: | ovs-system | False |        .        |       .       |   .    | de:93:7f:3d:d0:15 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   vlan20   |  True |   172.17.0.101  | 255.255.255.0 | global | 76:7c:1e:02:d2:1a |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   vlan21   |  True |   172.18.0.101  | 255.255.255.0 | global | 82:ad:59:2e:00:11 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   vlan22   |  True |   172.19.0.101  | 255.255.255.0 | global | fa:5e:1e:fe:2b:77 |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   vlan23   |  True |   172.20.0.101  | 255.255.255.0 | global | c2:1c:5d:ef:5b:4b |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   3   |    172.17.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   4   |    172.18.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   5   |    172.19.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   6   |    172.20.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   7   |  192.168.122.0  |    0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct  9 10:55:28 compute-0 cloud-init[1280]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 10:55:28 compute-0 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Cloud-config availability.
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Network is Online.
Oct  9 10:55:28 compute-0 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 10:55:28 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 10:55:28 compute-0 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 10:55:28 compute-0 systemd[1]: Starting System Logging Service...
Oct  9 10:55:28 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct  9 10:55:28 compute-0 sm-notify[1314]: Version 2.5.4 starting
Oct  9 10:55:28 compute-0 systemd[1]: Starting Permit User Sessions...
Oct  9 10:55:28 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 10:55:28 compute-0 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 10:55:28 compute-0 systemd[1]: Finished Permit User Sessions.
Oct  9 10:55:28 compute-0 systemd[1]: Started Command Scheduler.
Oct  9 10:55:28 compute-0 systemd[1]: Started Getty on tty1.
Oct  9 10:55:28 compute-0 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Login Prompts.
Oct  9 10:55:28 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct  9 10:55:28 compute-0 rsyslogd[1315]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1315" x-info="https://www.rsyslog.com"] start
Oct  9 10:55:28 compute-0 systemd[1]: Started System Logging Service.
Oct  9 10:55:28 compute-0 systemd[1]: Reached target Multi-User System.
Oct  9 10:55:29 compute-0 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 10:55:29 compute-0 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 10:55:29 compute-0 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 10:55:29 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:55:29 compute-0 cloud-init[1327]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 10:55:29 +0000. Up 8.84 seconds.
Oct  9 10:55:29 compute-0 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 10:55:29 compute-0 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 10:55:29 compute-0 cloud-init[1331]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 10:55:29 +0000. Up 9.20 seconds.
Oct  9 10:55:29 compute-0 cloud-init[1331]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 10:55:29 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.27 seconds
Oct  9 10:55:29 compute-0 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 10:55:29 compute-0 systemd[1]: Reached target Cloud-init target.
Oct  9 10:55:29 compute-0 systemd[1]: Startup finished in 1.543s (kernel) + 2.414s (initrd) + 5.370s (userspace) = 9.328s.
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 35 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 33 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 33 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 31 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 28 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 32 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 30 affinity is now unmanaged
Oct  9 10:55:36 compute-0 irqbalance[842]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  9 10:55:36 compute-0 irqbalance[842]: IRQ 29 affinity is now unmanaged
Oct  9 10:55:38 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 10:55:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 10:56:14 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:56:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:56:14 compute-0 systemd-logind[846]: New session 1 of user zuul.
Oct  9 10:56:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:56:14 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:56:14 compute-0 systemd[1341]: Queued start job for default target Main User Target.
Oct  9 10:56:14 compute-0 systemd[1341]: Created slice User Application Slice.
Oct  9 10:56:14 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:56:14 compute-0 systemd[1341]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:56:14 compute-0 systemd[1341]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:56:14 compute-0 systemd[1341]: Reached target Paths.
Oct  9 10:56:14 compute-0 systemd[1341]: Reached target Timers.
Oct  9 10:56:14 compute-0 systemd[1341]: Starting D-Bus User Message Bus Socket...
Oct  9 10:56:14 compute-0 systemd[1341]: Starting Create User's Volatile Files and Directories...
Oct  9 10:56:14 compute-0 systemd[1341]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:56:14 compute-0 systemd[1341]: Reached target Sockets.
Oct  9 10:56:14 compute-0 systemd[1341]: Finished Create User's Volatile Files and Directories.
Oct  9 10:56:14 compute-0 systemd[1341]: Reached target Basic System.
Oct  9 10:56:14 compute-0 systemd[1341]: Reached target Main User Target.
Oct  9 10:56:14 compute-0 systemd[1341]: Startup finished in 115ms.
Oct  9 10:56:14 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:56:14 compute-0 systemd[1]: Started Session 1 of User zuul.
Oct  9 10:56:15 compute-0 python3.9[1567]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:16 compute-0 systemd[1]: session-1.scope: Deactivated successfully.
Oct  9 10:56:16 compute-0 systemd-logind[846]: Session 1 logged out. Waiting for processes to exit.
Oct  9 10:56:16 compute-0 systemd-logind[846]: Removed session 1.
Oct  9 10:56:24 compute-0 systemd-logind[846]: New session 3 of user zuul.
Oct  9 10:56:24 compute-0 systemd[1]: Started Session 3 of User zuul.
Oct  9 10:56:31 compute-0 python3[2333]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 10:56:33 compute-0 python3[2428]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 10:56:35 compute-0 python3[2455]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:56:36 compute-0 python3[2481]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:56:36 compute-0 kernel: loop: module loaded
Oct  9 10:56:36 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct  9 10:56:36 compute-0 python3[2516]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:56:36 compute-0 lvm[2519]: PV /dev/loop3 not used.
Oct  9 10:56:36 compute-0 lvm[2528]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:56:36 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  9 10:56:36 compute-0 lvm[2530]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  9 10:56:36 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  9 10:56:37 compute-0 python3[2608]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:56:37 compute-0 python3[2681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007395.7242901-33453-23126080055992/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:38 compute-0 python3[2731]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 10:56:38 compute-0 systemd[1]: Reloading.
Oct  9 10:56:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:56:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:56:38 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct  9 10:56:38 compute-0 bash[2771]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Oct  9 10:56:38 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct  9 10:56:38 compute-0 lvm[2772]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:56:38 compute-0 lvm[2772]: VG ceph_vg0 finished
Oct  9 10:56:41 compute-0 python3[2796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 10:56:43 compute-0 python3[2889]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 10:56:45 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct  9 10:56:45 compute-0 systemd[1]: Started PackageKit Daemon.
Oct  9 10:56:46 compute-0 python3[2954]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 10:56:49 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 10:56:49 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  9 10:56:49 compute-0 python3[3008]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:56:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 10:56:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  9 10:56:50 compute-0 systemd[1]: run-re4dfc29229dd46d9a4500ec58c96a301.service: Deactivated successfully.
Oct  9 10:56:50 compute-0 python3[3100]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1358300770-merged.mount: Deactivated successfully.
Oct  9 10:56:50 compute-0 kernel: evm: overlay not supported
Oct  9 10:56:50 compute-0 podman[3103]: 2025-10-09 10:56:50.734576149 +0000 UTC m=+0.088357971 system refresh
Oct  9 10:56:51 compute-0 python3[3168]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:51 compute-0 python3[3194]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:56:52 compute-0 python3[3272]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:56:52 compute-0 python3[3345]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007410.7788424-33645-52253394705690/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:53 compute-0 python3[3447]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:56:53 compute-0 python3[3520]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007411.8385174-33663-101100834274983/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:56:54 compute-0 python3[3570]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:56:54 compute-0 python3[3598]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:56:54 compute-0 python3[3626]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:56:54 compute-0 python3[3654]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e990987d-9393-5e96-99ae-9e3a3319f191 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:56:55 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 10:56:55 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 10:56:55 compute-0 systemd-logind[846]: New session 4 of user ceph-admin.
Oct  9 10:56:55 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 10:56:55 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  9 10:56:55 compute-0 systemd[3662]: Queued start job for default target Main User Target.
Oct  9 10:56:55 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:56:55 compute-0 systemd[3662]: Created slice User Application Slice.
Oct  9 10:56:55 compute-0 systemd[3662]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:56:55 compute-0 systemd[3662]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:56:55 compute-0 systemd[3662]: Reached target Paths.
Oct  9 10:56:55 compute-0 systemd[3662]: Reached target Timers.
Oct  9 10:56:55 compute-0 systemd[3662]: Starting D-Bus User Message Bus Socket...
Oct  9 10:56:55 compute-0 systemd[3662]: Starting Create User's Volatile Files and Directories...
Oct  9 10:56:55 compute-0 systemd[3662]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:56:55 compute-0 systemd[3662]: Reached target Sockets.
Oct  9 10:56:55 compute-0 systemd[3662]: Finished Create User's Volatile Files and Directories.
Oct  9 10:56:55 compute-0 systemd[3662]: Reached target Basic System.
Oct  9 10:56:55 compute-0 systemd[3662]: Reached target Main User Target.
Oct  9 10:56:55 compute-0 systemd[3662]: Startup finished in 108ms.
Oct  9 10:56:55 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  9 10:56:55 compute-0 systemd[1]: Started Session 4 of User ceph-admin.
Oct  9 10:56:55 compute-0 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 10:56:55 compute-0 systemd-logind[846]: Session 4 logged out. Waiting for processes to exit.
Oct  9 10:56:55 compute-0 systemd-logind[846]: Removed session 4.
Oct  9 10:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:56:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3032207204-lower\x2dmapped.mount: Deactivated successfully.
Oct  9 10:57:05 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct  9 10:57:05 compute-0 systemd[3662]: Activating special unit Exit the Session...
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped target Main User Target.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped target Basic System.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped target Paths.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped target Sockets.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped target Timers.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 10:57:05 compute-0 systemd[3662]: Closed D-Bus User Message Bus Socket.
Oct  9 10:57:05 compute-0 systemd[3662]: Stopped Create User's Volatile Files and Directories.
Oct  9 10:57:05 compute-0 systemd[3662]: Removed slice User Application Slice.
Oct  9 10:57:05 compute-0 systemd[3662]: Reached target Shutdown.
Oct  9 10:57:05 compute-0 systemd[3662]: Finished Exit the Session.
Oct  9 10:57:05 compute-0 systemd[3662]: Reached target Exit the Session.
Oct  9 10:57:05 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct  9 10:57:05 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct  9 10:57:05 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  9 10:57:05 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  9 10:57:05 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  9 10:57:05 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  9 10:57:05 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct  9 10:57:13 compute-0 podman[3756]: 2025-10-09 10:57:13.26713616 +0000 UTC m=+17.567341168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.390651846 +0000 UTC m=+0.096157961 container create 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2856133033-merged.mount: Deactivated successfully.
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.318831397 +0000 UTC m=+0.024337522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:13 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  9 10:57:13 compute-0 systemd[1]: Started libpod-conmon-7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0.scope.
Oct  9 10:57:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.529188464 +0000 UTC m=+0.234694589 container init 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.53749044 +0000 UTC m=+0.242996525 container start 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.556793928 +0000 UTC m=+0.262300033 container attach 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 10:57:13 compute-0 magical_shaw[3831]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  9 10:57:13 compute-0 systemd[1]: libpod-7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0.scope: Deactivated successfully.
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.636924824 +0000 UTC m=+0.342430929 container died 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:13 compute-0 podman[3815]: 2025-10-09 10:57:13.824249703 +0000 UTC m=+0.529755788 container remove 7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0 (image=quay.io/ceph/ceph:v19, name=magical_shaw, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:57:13 compute-0 systemd[1]: libpod-conmon-7110e1342839b142b8481b998d1f11e39204e762f4d33e590be924588de491b0.scope: Deactivated successfully.
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.89688271 +0000 UTC m=+0.052008677 container create e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:13 compute-0 systemd[1]: Started libpod-conmon-e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d.scope.
Oct  9 10:57:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.866017081 +0000 UTC m=+0.021143078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.978463953 +0000 UTC m=+0.133589940 container init e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.984556948 +0000 UTC m=+0.139682915 container start e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:57:13 compute-0 interesting_wescoff[3868]: 167 167
Oct  9 10:57:13 compute-0 systemd[1]: libpod-e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d.scope: Deactivated successfully.
Oct  9 10:57:13 compute-0 conmon[3868]: conmon e9b0242f6eb5bd1043a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d.scope/container/memory.events
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.998114942 +0000 UTC m=+0.153240939 container attach e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  9 10:57:13 compute-0 podman[3851]: 2025-10-09 10:57:13.99866526 +0000 UTC m=+0.153791227 container died e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:14 compute-0 podman[3851]: 2025-10-09 10:57:14.04112762 +0000 UTC m=+0.196253697 container remove e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d (image=quay.io/ceph/ceph:v19, name=interesting_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 10:57:14 compute-0 systemd[1]: libpod-conmon-e9b0242f6eb5bd1043a89ba3e1ad15e43fcc2415e8192876caeaf0f990802a3d.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.105845213 +0000 UTC m=+0.042520683 container create 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 10:57:14 compute-0 systemd[1]: Started libpod-conmon-21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d.scope.
Oct  9 10:57:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.163610782 +0000 UTC m=+0.100286292 container init 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.169565583 +0000 UTC m=+0.106241053 container start 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.174387377 +0000 UTC m=+0.111062847 container attach 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.088864109 +0000 UTC m=+0.025539599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:14 compute-0 gifted_maxwell[3905]: AQAKledow6WTCxAAiVj1wUgxjo632Uk26MfcGQ==
Oct  9 10:57:14 compute-0 systemd[1]: libpod-21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.198447558 +0000 UTC m=+0.135123038 container died 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:14 compute-0 podman[3888]: 2025-10-09 10:57:14.243127379 +0000 UTC m=+0.179802849 container remove 21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d (image=quay.io/ceph/ceph:v19, name=gifted_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:14 compute-0 systemd[1]: libpod-conmon-21b820c96deca716dca538555bba3c614f367c7638f006587c88f5b1dc68771d.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.302269473 +0000 UTC m=+0.039890118 container create 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-48de64504dd756bd713352f388f4dd544dd23eceb1c2d61f15486d6a649a856f-merged.mount: Deactivated successfully.
Oct  9 10:57:14 compute-0 systemd[1]: Started libpod-conmon-4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb.scope.
Oct  9 10:57:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.372961847 +0000 UTC m=+0.110582512 container init 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.377952877 +0000 UTC m=+0.115573522 container start 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.285484035 +0000 UTC m=+0.023104700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.381696257 +0000 UTC m=+0.119316902 container attach 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:57:14 compute-0 jovial_kirch[3941]: AQAKledogLyXFxAAlRGwz4Am6Nv3jMKDPOQ58w==
Oct  9 10:57:14 compute-0 systemd[1]: libpod-4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.399620352 +0000 UTC m=+0.137240987 container died 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 10:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ad8e4b66ec0559c37adf4bffb6b24ff7640109244bd7c957711591feb8c76a2-merged.mount: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3925]: 2025-10-09 10:57:14.473859409 +0000 UTC m=+0.211480054 container remove 4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb (image=quay.io/ceph/ceph:v19, name=jovial_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:14 compute-0 systemd[1]: libpod-conmon-4deccf52352d3ca67f9e6f7a779a6703e4c8b2269f11514c892cbe95a67f7ebb.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.525529154 +0000 UTC m=+0.034485376 container create 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:57:14 compute-0 systemd[1]: Started libpod-conmon-48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6.scope.
Oct  9 10:57:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.587792557 +0000 UTC m=+0.096748779 container init 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.593964955 +0000 UTC m=+0.102921177 container start 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.596886749 +0000 UTC m=+0.105842971 container attach 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.509696756 +0000 UTC m=+0.018652998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:14 compute-0 peaceful_spence[3977]: AQAKledot2p0JBAAX3cn8MvY2HmKQ/C3AimlGQ==
Oct  9 10:57:14 compute-0 systemd[1]: libpod-48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.614530524 +0000 UTC m=+0.123486746 container died 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 10:57:14 compute-0 podman[3960]: 2025-10-09 10:57:14.648844344 +0000 UTC m=+0.157800566 container remove 48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6 (image=quay.io/ceph/ceph:v19, name=peaceful_spence, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:57:14 compute-0 systemd[1]: libpod-conmon-48092e2894d2537e95d294bad5c812f1a2397be7926f23fc80ea61c28f65e6b6.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.70246232 +0000 UTC m=+0.035711154 container create fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:14 compute-0 systemd[1]: Started libpod-conmon-fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5.scope.
Oct  9 10:57:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c997607b0a383bc210c25f3cd86dbed67b7d83b769438e4e58d9f5aee68f3ea4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.767016008 +0000 UTC m=+0.100264872 container init fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.771482391 +0000 UTC m=+0.104731225 container start fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.774597761 +0000 UTC m=+0.107846585 container attach fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.687601715 +0000 UTC m=+0.020850569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:14 compute-0 upbeat_tesla[4013]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  9 10:57:14 compute-0 upbeat_tesla[4013]: setting min_mon_release = quincy
Oct  9 10:57:14 compute-0 upbeat_tesla[4013]: /usr/bin/monmaptool: set fsid to e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:14 compute-0 upbeat_tesla[4013]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  9 10:57:14 compute-0 systemd[1]: libpod-fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.80050707 +0000 UTC m=+0.133755904 container died fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 10:57:14 compute-0 podman[3996]: 2025-10-09 10:57:14.832102602 +0000 UTC m=+0.165351436 container remove fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5 (image=quay.io/ceph/ceph:v19, name=upbeat_tesla, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:57:14 compute-0 systemd[1]: libpod-conmon-fdb2bdbb0a3f906d27e101851a47b1cf773f30bf8e82d1ba8ab69668ddf572c5.scope: Deactivated successfully.
Oct  9 10:57:14 compute-0 podman[4031]: 2025-10-09 10:57:14.894741879 +0000 UTC m=+0.039522947 container create 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:57:14 compute-0 systemd[1]: Started libpod-conmon-528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be.scope.
Oct  9 10:57:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/482c71ece98960da40ab3f5b40b17e837e142779bc7377fc722d45bdd3310687/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/482c71ece98960da40ab3f5b40b17e837e142779bc7377fc722d45bdd3310687/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/482c71ece98960da40ab3f5b40b17e837e142779bc7377fc722d45bdd3310687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/482c71ece98960da40ab3f5b40b17e837e142779bc7377fc722d45bdd3310687/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:14 compute-0 podman[4031]: 2025-10-09 10:57:14.952665814 +0000 UTC m=+0.097446862 container init 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:14 compute-0 podman[4031]: 2025-10-09 10:57:14.957388865 +0000 UTC m=+0.102169913 container start 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 10:57:14 compute-0 podman[4031]: 2025-10-09 10:57:14.963992227 +0000 UTC m=+0.108773295 container attach 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:57:14 compute-0 podman[4031]: 2025-10-09 10:57:14.880175382 +0000 UTC m=+0.024956440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:15 compute-0 systemd[1]: libpod-528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be.scope: Deactivated successfully.
Oct  9 10:57:15 compute-0 podman[4031]: 2025-10-09 10:57:15.031686425 +0000 UTC m=+0.176467473 container died 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:15 compute-0 podman[4031]: 2025-10-09 10:57:15.062280905 +0000 UTC m=+0.207061953 container remove 528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be (image=quay.io/ceph/ceph:v19, name=kind_cartwright, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:57:15 compute-0 systemd[1]: libpod-conmon-528ba6eb8da9c66e941b00b9358938316187205f8661691504af313b29e247be.scope: Deactivated successfully.
Oct  9 10:57:15 compute-0 systemd[1]: Reloading.
Oct  9 10:57:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:15 compute-0 systemd[1]: Reloading.
Oct  9 10:57:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:15 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct  9 10:57:15 compute-0 systemd[1]: Reloading.
Oct  9 10:57:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:15 compute-0 systemd[1]: Reached target Ceph cluster e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:15 compute-0 systemd[1]: Reloading.
Oct  9 10:57:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:16 compute-0 systemd[1]: Reloading.
Oct  9 10:57:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:16 compute-0 systemd[1]: Created slice Slice /system/ceph-e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:16 compute-0 systemd[1]: Reached target System Time Set.
Oct  9 10:57:16 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct  9 10:57:16 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:16 compute-0 podman[4329]: 2025-10-09 10:57:16.541566323 +0000 UTC m=+0.039811466 container create 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d6d2d9e49dd5e88830f78062c32af71de4e606f6fc982461d9d6016dbff69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d6d2d9e49dd5e88830f78062c32af71de4e606f6fc982461d9d6016dbff69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d6d2d9e49dd5e88830f78062c32af71de4e606f6fc982461d9d6016dbff69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d6d2d9e49dd5e88830f78062c32af71de4e606f6fc982461d9d6016dbff69/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 podman[4329]: 2025-10-09 10:57:16.615230921 +0000 UTC m=+0.113476084 container init 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:16 compute-0 podman[4329]: 2025-10-09 10:57:16.521552922 +0000 UTC m=+0.019798085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:16 compute-0 podman[4329]: 2025-10-09 10:57:16.622078731 +0000 UTC m=+0.120323874 container start 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:57:16 compute-0 bash[4329]: 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de
Oct  9 10:57:16 compute-0 systemd[1]: Started Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:16 compute-0 ceph-mon[4348]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: pidfile_write: ignore empty --pid-file
Oct  9 10:57:16 compute-0 ceph-mon[4348]: load: jerasure load: lrc 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: RocksDB version: 7.9.2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Git sha 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: DB SUMMARY
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: DB Session ID:  KM9642OEFA35ZOBEPHM3
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: CURRENT file:  CURRENT
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                         Options.error_if_exists: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.create_if_missing: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                                     Options.env: 0x5561c6649c20
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                                Options.info_log: 0x5561c7e00d60
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                              Options.statistics: (nil)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                               Options.use_fsync: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                              Options.db_log_dir: 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                                 Options.wal_dir: 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                    Options.write_buffer_manager: 0x5561c7e05900
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.unordered_write: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                               Options.row_cache: None
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                              Options.wal_filter: None
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.two_write_queues: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.wal_compression: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.atomic_flush: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.max_background_jobs: 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.max_background_compactions: -1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.max_subcompactions: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                          Options.max_open_files: -1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Compression algorithms supported:
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kZSTD supported: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kXpressCompression supported: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kZlibCompression supported: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:           Options.merge_operator: 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:        Options.compaction_filter: None
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5561c7e00500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5561c7e25350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.compression: NoCompression
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.num_levels: 7
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 16685083-1a78-43b3-bfd2-221d12c7d9cc
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007436677220, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007436688657, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007436, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "KM9642OEFA35ZOBEPHM3", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007436688836, "job": 1, "event": "recovery_finished"}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5561c7e26e00
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: DB pointer 0x5561c7f30000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:57:16 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5561c7e25350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 10:57:16 compute-0 ceph-mon[4348]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@-1(???) e0 preinit fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  9 10:57:16 compute-0 ceph-mon[4348]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  9 10:57:16 compute-0 podman[4349]: 2025-10-09 10:57:16.73192976 +0000 UTC m=+0.063693512 container create 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  9 10:57:16 compute-0 ceph-mon[4348]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T10:57:14.796633+0000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : created 2025-10-09T10:57:14.796633+0000
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,os=Linux}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).mds e1 new map
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-10-09T10:57:16:742582+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mkfs e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  9 10:57:16 compute-0 ceph-mon[4348]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  9 10:57:16 compute-0 ceph-mon[4348]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:16 compute-0 systemd[1]: Started libpod-conmon-45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c.scope.
Oct  9 10:57:16 compute-0 podman[4349]: 2025-10-09 10:57:16.702897949 +0000 UTC m=+0.034661721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:16 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12629e6e32f727eaedd50e4b9ed0b1a79196b4846d382dd2d5f083e0ec23118a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12629e6e32f727eaedd50e4b9ed0b1a79196b4846d382dd2d5f083e0ec23118a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12629e6e32f727eaedd50e4b9ed0b1a79196b4846d382dd2d5f083e0ec23118a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:16 compute-0 podman[4349]: 2025-10-09 10:57:16.819550105 +0000 UTC m=+0.151313857 container init 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 10:57:16 compute-0 podman[4349]: 2025-10-09 10:57:16.829234566 +0000 UTC m=+0.160998298 container start 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:57:16 compute-0 podman[4349]: 2025-10-09 10:57:16.832958444 +0000 UTC m=+0.164722196 container attach 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709086033' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:  cluster:
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    id:     e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    health: HEALTH_OK
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]: 
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:  services:
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    mon: 1 daemons, quorum compute-0 (age 0.278421s)
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    mgr: no daemons active
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    osd: 0 osds: 0 up, 0 in
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]: 
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:  data:
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    pools:   0 pools, 0 pgs
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    objects: 0 objects, 0 B
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    usage:   0 B used, 0 B / 0 B avail
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]:    pgs:     
Oct  9 10:57:17 compute-0 dazzling_davinci[4403]: 
Oct  9 10:57:17 compute-0 systemd[1]: libpod-45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4429]: 2025-10-09 10:57:17.073825659 +0000 UTC m=+0.024070452 container died 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 10:57:17 compute-0 podman[4429]: 2025-10-09 10:57:17.111515856 +0000 UTC m=+0.061760629 container remove 45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c (image=quay.io/ceph/ceph:v19, name=dazzling_davinci, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:17 compute-0 systemd[1]: libpod-conmon-45a24dd32fd820113233b92c14fe58a263401d4fa144b04555b054f46b46347c.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.171414175 +0000 UTC m=+0.034515597 container create 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:17 compute-0 systemd[1]: Started libpod-conmon-665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517.scope.
Oct  9 10:57:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b85060cf4f91ec504359cfc0819d23f040b4bb337c533d24fd59e32675b3b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b85060cf4f91ec504359cfc0819d23f040b4bb337c533d24fd59e32675b3b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b85060cf4f91ec504359cfc0819d23f040b4bb337c533d24fd59e32675b3b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b85060cf4f91ec504359cfc0819d23f040b4bb337c533d24fd59e32675b3b6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.227052427 +0000 UTC m=+0.090153879 container init 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.232572484 +0000 UTC m=+0.095673906 container start 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.235835268 +0000 UTC m=+0.098936710 container attach 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.15627844 +0000 UTC m=+0.019379882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:17 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/939276554' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:57:17 compute-0 ceph-mon[4348]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/939276554' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 10:57:17 compute-0 mystifying_pike[4461]: 
Oct  9 10:57:17 compute-0 mystifying_pike[4461]: [global]
Oct  9 10:57:17 compute-0 mystifying_pike[4461]: 	fsid = e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:17 compute-0 mystifying_pike[4461]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  9 10:57:17 compute-0 systemd[1]: libpod-665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.422175786 +0000 UTC m=+0.285277208 container died 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b85060cf4f91ec504359cfc0819d23f040b4bb337c533d24fd59e32675b3b6-merged.mount: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4444]: 2025-10-09 10:57:17.460596406 +0000 UTC m=+0.323697828 container remove 665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517 (image=quay.io/ceph/ceph:v19, name=mystifying_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:57:17 compute-0 systemd[1]: libpod-conmon-665f108639a516fdeeb53eac9dcd2b9068438d34db576941bbd534e909506517.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.512010493 +0000 UTC m=+0.033921597 container create ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:17 compute-0 systemd[1]: Started libpod-conmon-ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6.scope.
Oct  9 10:57:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df518a8b3b981e543e0c8263d032de2c89cc89b6f8954d128fd91885afe5d83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df518a8b3b981e543e0c8263d032de2c89cc89b6f8954d128fd91885afe5d83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df518a8b3b981e543e0c8263d032de2c89cc89b6f8954d128fd91885afe5d83/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df518a8b3b981e543e0c8263d032de2c89cc89b6f8954d128fd91885afe5d83/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.582336416 +0000 UTC m=+0.104247540 container init ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.588773481 +0000 UTC m=+0.110684585 container start ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.591918212 +0000 UTC m=+0.113829336 container attach ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.497963923 +0000 UTC m=+0.019875027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:17 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417433561' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:17 compute-0 systemd[1]: libpod-ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.771139112 +0000 UTC m=+0.293050206 container died ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:17 compute-0 ceph-mon[4348]: from='client.? 192.168.122.100:0/939276554' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:57:17 compute-0 ceph-mon[4348]: from='client.? 192.168.122.100:0/939276554' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 10:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df518a8b3b981e543e0c8263d032de2c89cc89b6f8954d128fd91885afe5d83-merged.mount: Deactivated successfully.
Oct  9 10:57:17 compute-0 podman[4499]: 2025-10-09 10:57:17.81383244 +0000 UTC m=+0.335743544 container remove ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6 (image=quay.io/ceph/ceph:v19, name=hardcore_fermi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:57:17 compute-0 systemd[1]: libpod-conmon-ecddded990d258e2edbd70db358db84686c8210360537642b3b2b118e3ad00e6.scope: Deactivated successfully.
Oct  9 10:57:17 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:57:18 compute-0 ceph-mon[4348]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  9 10:57:18 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  9 10:57:18 compute-0 ceph-mon[4348]: mon.compute-0@0(leader) e1 shutdown
Oct  9 10:57:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0[4344]: 2025-10-09T10:57:18.019+0000 7fdea0d95640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  9 10:57:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0[4344]: 2025-10-09T10:57:18.019+0000 7fdea0d95640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  9 10:57:18 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 10:57:18 compute-0 ceph-mon[4348]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 10:57:18 compute-0 podman[4583]: 2025-10-09 10:57:18.162163156 +0000 UTC m=+0.180163541 container died 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-294d6d2d9e49dd5e88830f78062c32af71de4e606f6fc982461d9d6016dbff69-merged.mount: Deactivated successfully.
Oct  9 10:57:18 compute-0 podman[4583]: 2025-10-09 10:57:18.208523841 +0000 UTC m=+0.226524236 container remove 16047ed0c7918f9bb8a8818f2d1de7b4416649b22def12905492bc4c36c0f3de (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:18 compute-0 bash[4583]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0
Oct  9 10:57:18 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mon.compute-0.service: Deactivated successfully.
Oct  9 10:57:18 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:18 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 10:57:18 compute-0 podman[4684]: 2025-10-09 10:57:18.548345355 +0000 UTC m=+0.033091331 container create 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb02785ee0c848b30cbb6872e91b4d8ddbcc069e8f8f03b8f7031e8eacbc94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb02785ee0c848b30cbb6872e91b4d8ddbcc069e8f8f03b8f7031e8eacbc94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb02785ee0c848b30cbb6872e91b4d8ddbcc069e8f8f03b8f7031e8eacbc94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb02785ee0c848b30cbb6872e91b4d8ddbcc069e8f8f03b8f7031e8eacbc94/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 podman[4684]: 2025-10-09 10:57:18.601459966 +0000 UTC m=+0.086205942 container init 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:18 compute-0 podman[4684]: 2025-10-09 10:57:18.606405244 +0000 UTC m=+0.091151220 container start 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:57:18 compute-0 bash[4684]: 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78
Oct  9 10:57:18 compute-0 podman[4684]: 2025-10-09 10:57:18.534050547 +0000 UTC m=+0.018796543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:18 compute-0 systemd[1]: Started Ceph mon.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:18 compute-0 ceph-mon[4705]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: pidfile_write: ignore empty --pid-file
Oct  9 10:57:18 compute-0 ceph-mon[4705]: load: jerasure load: lrc 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: RocksDB version: 7.9.2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Git sha 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: DB SUMMARY
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: DB Session ID:  PFLMSQ4A6H5TNSVWO03K
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: CURRENT file:  CURRENT
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 59851 ; 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                         Options.error_if_exists: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.create_if_missing: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                                     Options.env: 0x557308397c20
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                                Options.info_log: 0x557309a93ac0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                              Options.statistics: (nil)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                               Options.use_fsync: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                              Options.db_log_dir: 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                                 Options.wal_dir: 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                    Options.write_buffer_manager: 0x557309a97900
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.unordered_write: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                               Options.row_cache: None
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                              Options.wal_filter: None
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.two_write_queues: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.wal_compression: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.atomic_flush: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.max_background_jobs: 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.max_background_compactions: -1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.max_subcompactions: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                          Options.max_open_files: -1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Compression algorithms supported:
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kZSTD supported: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kXpressCompression supported: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kZlibCompression supported: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:           Options.merge_operator: 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:        Options.compaction_filter: None
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557309a92aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557309ab7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.compression: NoCompression
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.num_levels: 7
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 16685083-1a78-43b3-bfd2-221d12c7d9cc
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007438643945, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007438651565, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 58087, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3209, "raw_average_key_size": 30, "raw_value_size": 55570, "raw_average_value_size": 529, "num_data_blocks": 9, "num_entries": 105, "num_filter_entries": 105, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007438, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "PFLMSQ4A6H5TNSVWO03K", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007438651675, "job": 1, "event": "recovery_finished"}
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557309ab8e00
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: DB pointer 0x557309bc2000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:57:18 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.12 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.8      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0   60.12 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.8      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.8      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.8      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.81 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557309ab7350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 10:57:18 compute-0 ceph-mon[4705]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???) e1 preinit fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).mds e1 new map
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-10-09T10:57:16:742582+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  9 10:57:18 compute-0 ceph-mon[4705]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  9 10:57:18 compute-0 podman[4706]: 2025-10-09 10:57:18.685245389 +0000 UTC m=+0.037746190 container create 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T10:57:14.796633+0000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : created 2025-10-09T10:57:14.796633+0000
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  9 10:57:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  9 10:57:18 compute-0 systemd[1]: Started libpod-conmon-33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f.scope.
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 10:57:18 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f6f31190bc99c5a2101f1fb61b98ced69b2d14ce9aed1a9c8e08df20d2c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f6f31190bc99c5a2101f1fb61b98ced69b2d14ce9aed1a9c8e08df20d2c9a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f6f31190bc99c5a2101f1fb61b98ced69b2d14ce9aed1a9c8e08df20d2c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:18 compute-0 podman[4706]: 2025-10-09 10:57:18.669186355 +0000 UTC m=+0.021687156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:18 compute-0 podman[4706]: 2025-10-09 10:57:18.778736613 +0000 UTC m=+0.131237434 container init 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 10:57:18 compute-0 podman[4706]: 2025-10-09 10:57:18.786473691 +0000 UTC m=+0.138974522 container start 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:18 compute-0 podman[4706]: 2025-10-09 10:57:18.804755346 +0000 UTC m=+0.157256177 container attach 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:57:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct  9 10:57:19 compute-0 systemd[1]: libpod-33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f.scope: Deactivated successfully.
Oct  9 10:57:19 compute-0 podman[4706]: 2025-10-09 10:57:19.00002404 +0000 UTC m=+0.352524851 container died 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 10:57:19 compute-0 podman[4706]: 2025-10-09 10:57:19.050699943 +0000 UTC m=+0.403200744 container remove 33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f (image=quay.io/ceph/ceph:v19, name=musing_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:57:19 compute-0 systemd[1]: libpod-conmon-33ac0ad59b0fbbce9ad368787e9a9c44e71565988516ad1399582b3b0f82e14f.scope: Deactivated successfully.
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.103480503 +0000 UTC m=+0.036177969 container create 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:57:19 compute-0 systemd[1]: Started libpod-conmon-940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b.scope.
Oct  9 10:57:19 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e854ca5f92f30081dfc3526719126d6437bab5578be0add288d355945fc36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e854ca5f92f30081dfc3526719126d6437bab5578be0add288d355945fc36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e854ca5f92f30081dfc3526719126d6437bab5578be0add288d355945fc36c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.166770271 +0000 UTC m=+0.099467757 container init 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.172440363 +0000 UTC m=+0.105137829 container start 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.176046637 +0000 UTC m=+0.108744103 container attach 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.087871614 +0000 UTC m=+0.020569110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:19 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct  9 10:57:19 compute-0 systemd[1]: libpod-940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b.scope: Deactivated successfully.
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.373499112 +0000 UTC m=+0.306196588 container died 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 10:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-48e854ca5f92f30081dfc3526719126d6437bab5578be0add288d355945fc36c-merged.mount: Deactivated successfully.
Oct  9 10:57:19 compute-0 podman[4798]: 2025-10-09 10:57:19.471302424 +0000 UTC m=+0.403999880 container remove 940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b (image=quay.io/ceph/ceph:v19, name=nice_williams, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:57:19 compute-0 systemd[1]: libpod-conmon-940ba60a9f9fd25daa83bcf9058d46cc781df930efc9779fa5b481ea0e0dbe4b.scope: Deactivated successfully.
Oct  9 10:57:19 compute-0 systemd[1]: Reloading.
Oct  9 10:57:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:19 compute-0 systemd[1]: Reloading.
Oct  9 10:57:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:19 compute-0 systemd[1]: Starting Ceph mgr.compute-0.izrudc for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:57:20 compute-0 podman[4978]: 2025-10-09 10:57:20.208572237 +0000 UTC m=+0.042913355 container create 00875a7cafe3d43e138d17efa6b6bf9637be179ad33540cb1f2b9fff6673fcf5 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e76071a1e86ac49aa4ce1abb89771264db9c70ed5a407459911f13ac25f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e76071a1e86ac49aa4ce1abb89771264db9c70ed5a407459911f13ac25f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e76071a1e86ac49aa4ce1abb89771264db9c70ed5a407459911f13ac25f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7e76071a1e86ac49aa4ce1abb89771264db9c70ed5a407459911f13ac25f28/merged/var/lib/ceph/mgr/ceph-compute-0.izrudc supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 podman[4978]: 2025-10-09 10:57:20.281457122 +0000 UTC m=+0.115798260 container init 00875a7cafe3d43e138d17efa6b6bf9637be179ad33540cb1f2b9fff6673fcf5 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 10:57:20 compute-0 podman[4978]: 2025-10-09 10:57:20.186111568 +0000 UTC m=+0.020452646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:20 compute-0 podman[4978]: 2025-10-09 10:57:20.287179015 +0000 UTC m=+0.121520113 container start 00875a7cafe3d43e138d17efa6b6bf9637be179ad33540cb1f2b9fff6673fcf5 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:57:20 compute-0 bash[4978]: 00875a7cafe3d43e138d17efa6b6bf9637be179ad33540cb1f2b9fff6673fcf5
Oct  9 10:57:20 compute-0 systemd[1]: Started Ceph mgr.compute-0.izrudc for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 10:57:20 compute-0 podman[5018]: 2025-10-09 10:57:20.429119381 +0000 UTC m=+0.048599818 container create 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 10:57:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:20.459+0000 7f1d4c1b9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:57:20 compute-0 systemd[1]: Started libpod-conmon-4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c.scope.
Oct  9 10:57:20 compute-0 podman[5018]: 2025-10-09 10:57:20.40538716 +0000 UTC m=+0.024867627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44223e9b7a4d98443985652cf83f6c507f53f5f4a6c35c2d4b2ecc1590215b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44223e9b7a4d98443985652cf83f6c507f53f5f4a6c35c2d4b2ecc1590215b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44223e9b7a4d98443985652cf83f6c507f53f5f4a6c35c2d4b2ecc1590215b30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:20 compute-0 podman[5018]: 2025-10-09 10:57:20.535615132 +0000 UTC m=+0.155095599 container init 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:57:20 compute-0 podman[5018]: 2025-10-09 10:57:20.543447772 +0000 UTC m=+0.162928209 container start 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:20 compute-0 podman[5018]: 2025-10-09 10:57:20.550448097 +0000 UTC m=+0.169928554 container attach 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:57:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 10:57:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:20.552+0000 7f1d4c1b9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:57:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:57:20 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120068928' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:57:20 compute-0 infallible_gates[5034]: 
Oct  9 10:57:20 compute-0 infallible_gates[5034]: {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "health": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "status": "HEALTH_OK",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "checks": {},
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "mutes": []
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "election_epoch": 5,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "quorum": [
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        0
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    ],
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "quorum_names": [
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "compute-0"
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    ],
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "quorum_age": 2,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "monmap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "epoch": 1,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "min_mon_release_name": "squid",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_mons": 1
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "osdmap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "epoch": 1,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_osds": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_up_osds": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "osd_up_since": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_in_osds": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "osd_in_since": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_remapped_pgs": 0
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "pgmap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "pgs_by_state": [],
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_pgs": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_pools": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_objects": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "data_bytes": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "bytes_used": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "bytes_avail": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "bytes_total": 0
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "fsmap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "epoch": 1,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "btime": "2025-10-09T10:57:16.742582+0000",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "by_rank": [],
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "up:standby": 0
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "mgrmap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "available": false,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "num_standbys": 0,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "modules": [
Oct  9 10:57:20 compute-0 infallible_gates[5034]:            "iostat",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:            "nfs",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:            "restful"
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        ],
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "services": {}
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "servicemap": {
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "epoch": 1,
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "modified": "2025-10-09T10:57:16.745535+0000",
Oct  9 10:57:20 compute-0 infallible_gates[5034]:        "services": {}
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    },
Oct  9 10:57:20 compute-0 infallible_gates[5034]:    "progress_events": {}
Oct  9 10:57:20 compute-0 infallible_gates[5034]: }
Oct  9 10:57:20 compute-0 systemd[1]: libpod-4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c.scope: Deactivated successfully.
Oct  9 10:57:20 compute-0 podman[5061]: 2025-10-09 10:57:20.801319401 +0000 UTC m=+0.033978599 container died 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 10:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-44223e9b7a4d98443985652cf83f6c507f53f5f4a6c35c2d4b2ecc1590215b30-merged.mount: Deactivated successfully.
Oct  9 10:57:20 compute-0 podman[5061]: 2025-10-09 10:57:20.841062745 +0000 UTC m=+0.073721933 container remove 4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c (image=quay.io/ceph/ceph:v19, name=infallible_gates, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:20 compute-0 systemd[1]: libpod-conmon-4175dbd309f2b8688a9cdb21da3e406a061c206de9aae22e12024cd07feca17c.scope: Deactivated successfully.
Oct  9 10:57:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 10:57:21 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:57:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 10:57:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:21.384+0000 7f1d4c1b9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:57:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:22.059+0000 7f1d4c1b9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:22.235+0000 7f1d4c1b9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:22.305+0000 7f1d4c1b9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 10:57:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:22.446+0000 7f1d4c1b9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 10:57:22 compute-0 podman[5087]: 2025-10-09 10:57:22.912632232 +0000 UTC m=+0.040573161 container create a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:57:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 10:57:22 compute-0 systemd[1]: Started libpod-conmon-a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06.scope.
Oct  9 10:57:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ba2a93f1e89471e58ad69841999f30622ee1ce4ab4387dc1df1fc7fab8aa8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ba2a93f1e89471e58ad69841999f30622ee1ce4ab4387dc1df1fc7fab8aa8e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ba2a93f1e89471e58ad69841999f30622ee1ce4ab4387dc1df1fc7fab8aa8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:22 compute-0 podman[5087]: 2025-10-09 10:57:22.986671992 +0000 UTC m=+0.114612931 container init a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:22 compute-0 podman[5087]: 2025-10-09 10:57:22.894944555 +0000 UTC m=+0.022885504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:22 compute-0 podman[5087]: 2025-10-09 10:57:22.992310633 +0000 UTC m=+0.120251562 container start a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:22 compute-0 podman[5087]: 2025-10-09 10:57:22.995914859 +0000 UTC m=+0.123855868 container attach a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 10:57:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:57:23 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038950301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:57:23 compute-0 loving_curie[5104]: 
Oct  9 10:57:23 compute-0 loving_curie[5104]: {
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "health": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "status": "HEALTH_OK",
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "checks": {},
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "mutes": []
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "election_epoch": 5,
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "quorum": [
Oct  9 10:57:23 compute-0 loving_curie[5104]:        0
Oct  9 10:57:23 compute-0 loving_curie[5104]:    ],
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "quorum_names": [
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "compute-0"
Oct  9 10:57:23 compute-0 loving_curie[5104]:    ],
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "quorum_age": 4,
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "monmap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "epoch": 1,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "min_mon_release_name": "squid",
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_mons": 1
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "osdmap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "epoch": 1,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_osds": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_up_osds": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "osd_up_since": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_in_osds": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "osd_in_since": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_remapped_pgs": 0
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "pgmap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "pgs_by_state": [],
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_pgs": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_pools": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_objects": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "data_bytes": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "bytes_used": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "bytes_avail": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "bytes_total": 0
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "fsmap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "epoch": 1,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "btime": "2025-10-09T10:57:16:742582+0000",
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "by_rank": [],
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "up:standby": 0
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "mgrmap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "available": false,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "num_standbys": 0,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "modules": [
Oct  9 10:57:23 compute-0 loving_curie[5104]:            "iostat",
Oct  9 10:57:23 compute-0 loving_curie[5104]:            "nfs",
Oct  9 10:57:23 compute-0 loving_curie[5104]:            "restful"
Oct  9 10:57:23 compute-0 loving_curie[5104]:        ],
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "services": {}
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "servicemap": {
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "epoch": 1,
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "modified": "2025-10-09T10:57:16.745535+0000",
Oct  9 10:57:23 compute-0 loving_curie[5104]:        "services": {}
Oct  9 10:57:23 compute-0 loving_curie[5104]:    },
Oct  9 10:57:23 compute-0 loving_curie[5104]:    "progress_events": {}
Oct  9 10:57:23 compute-0 loving_curie[5104]: }
Oct  9 10:57:23 compute-0 systemd[1]: libpod-a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06.scope: Deactivated successfully.
Oct  9 10:57:23 compute-0 podman[5087]: 2025-10-09 10:57:23.181172542 +0000 UTC m=+0.309113471 container died a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-52ba2a93f1e89471e58ad69841999f30622ee1ce4ab4387dc1df1fc7fab8aa8e-merged.mount: Deactivated successfully.
Oct  9 10:57:23 compute-0 podman[5087]: 2025-10-09 10:57:23.217613809 +0000 UTC m=+0.345554738 container remove a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06 (image=quay.io/ceph/ceph:v19, name=loving_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:57:23 compute-0 systemd[1]: libpod-conmon-a57ae27bb74ac893e709b1e2ee6baf189791190eb81a0f7e1179edacd5a78a06.scope: Deactivated successfully.
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 10:57:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:23.507+0000 7f1d4c1b9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 10:57:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:23.733+0000 7f1d4c1b9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 10:57:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:23.825+0000 7f1d4c1b9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:23.896+0000 7f1d4c1b9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:57:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 10:57:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:23.973+0000 7f1d4c1b9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 10:57:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:24.043+0000 7f1d4c1b9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:24.406+0000 7f1d4c1b9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 10:57:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:24.508+0000 7f1d4c1b9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:57:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 10:57:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:24.957+0000 7f1d4c1b9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.285681994 +0000 UTC m=+0.042485132 container create 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 10:57:25 compute-0 systemd[1]: Started libpod-conmon-272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba.scope.
Oct  9 10:57:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb67839e40ae1d620b8a993e098f9f24f2b31452f5528b292503fa6bebe986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb67839e40ae1d620b8a993e098f9f24f2b31452f5528b292503fa6bebe986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bb67839e40ae1d620b8a993e098f9f24f2b31452f5528b292503fa6bebe986/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.265899551 +0000 UTC m=+0.022702709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.364359574 +0000 UTC m=+0.121162712 container init 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.373027782 +0000 UTC m=+0.129830920 container start 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.376239345 +0000 UTC m=+0.133042503 container attach 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:57:25 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381540241' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:57:25 compute-0 nervous_keller[5158]: 
Oct  9 10:57:25 compute-0 nervous_keller[5158]: {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "health": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "status": "HEALTH_OK",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "checks": {},
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "mutes": []
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "election_epoch": 5,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "quorum": [
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        0
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    ],
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "quorum_names": [
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "compute-0"
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    ],
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "quorum_age": 6,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "monmap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "epoch": 1,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "min_mon_release_name": "squid",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_mons": 1
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "osdmap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "epoch": 1,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_osds": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_up_osds": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "osd_up_since": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_in_osds": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "osd_in_since": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_remapped_pgs": 0
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "pgmap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "pgs_by_state": [],
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_pgs": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_pools": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_objects": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "data_bytes": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "bytes_used": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "bytes_avail": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "bytes_total": 0
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "fsmap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "epoch": 1,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "btime": "2025-10-09T10:57:16:742582+0000",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "by_rank": [],
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "up:standby": 0
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "mgrmap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "available": false,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "num_standbys": 0,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "modules": [
Oct  9 10:57:25 compute-0 nervous_keller[5158]:            "iostat",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:            "nfs",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:            "restful"
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        ],
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "services": {}
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "servicemap": {
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "epoch": 1,
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "modified": "2025-10-09T10:57:16.745535+0000",
Oct  9 10:57:25 compute-0 nervous_keller[5158]:        "services": {}
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    },
Oct  9 10:57:25 compute-0 nervous_keller[5158]:    "progress_events": {}
Oct  9 10:57:25 compute-0 nervous_keller[5158]: }
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 10:57:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:25.559+0000 7f1d4c1b9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 systemd[1]: libpod-272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba.scope: Deactivated successfully.
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.576172838 +0000 UTC m=+0.332975976 container died 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Oct  9 10:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9bb67839e40ae1d620b8a993e098f9f24f2b31452f5528b292503fa6bebe986-merged.mount: Deactivated successfully.
Oct  9 10:57:25 compute-0 podman[5142]: 2025-10-09 10:57:25.611173519 +0000 UTC m=+0.367976657 container remove 272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba (image=quay.io/ceph/ceph:v19, name=nervous_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:25 compute-0 systemd[1]: libpod-conmon-272c0e18c1937eb2d5371058f427d8902395ca036d1f6c8214082617a8082cba.scope: Deactivated successfully.
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 10:57:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:25.641+0000 7f1d4c1b9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 10:57:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:25.739+0000 7f1d4c1b9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 10:57:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:25.902+0000 7f1d4c1b9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:57:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 10:57:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:25.979+0000 7f1d4c1b9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 10:57:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:26.162+0000 7f1d4c1b9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 10:57:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:26.387+0000 7f1d4c1b9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:26.652+0000 7f1d4c1b9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:26.721+0000 7f1d4c1b9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x55ae88a529c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr handle_mgr_map Activating!
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr handle_mgr_map I am now activating
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.izrudc(active, starting, since 0.0112579s)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: balancer
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: crash
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer INFO root] Starting
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Manager daemon compute-0.izrudc is now available
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_10:57:26
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer INFO root] do_upmap
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: devicehealth
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [balancer INFO root] No pools available
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Starting
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: iostat
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: nfs
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: orchestrator
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: pg_autoscaler
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: progress
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [progress INFO root] Loading...
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [progress INFO root] No stored events to load
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded [] historic events
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] recovery thread starting
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] starting setup
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: rbd_support
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: restful
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: status
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: telemetry
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [restful WARNING root] server not running: no certificate configured
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] PerfHandler: starting
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TaskHandler: starting
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"} v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: [rbd_support INFO root] setup complete
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct  9 10:57:26 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 10:57:26 compute-0 ceph-mon[4705]: Manager daemon compute-0.izrudc is now available
Oct  9 10:57:26 compute-0 ceph-mon[4705]: from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 10:57:26 compute-0 ceph-mon[4705]: from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct  9 10:57:26 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: volumes
Oct  9 10:57:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:27 compute-0 podman[5275]: 2025-10-09 10:57:27.683229042 +0000 UTC m=+0.042353147 container create 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:57:27 compute-0 systemd[1]: Started libpod-conmon-032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3.scope.
Oct  9 10:57:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a576cd4ac4ee71dc1046c064adb77244c131c1a9fec5cb125d4eac9bfab73000/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a576cd4ac4ee71dc1046c064adb77244c131c1a9fec5cb125d4eac9bfab73000/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a576cd4ac4ee71dc1046c064adb77244c131c1a9fec5cb125d4eac9bfab73000/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:27 compute-0 podman[5275]: 2025-10-09 10:57:27.750035742 +0000 UTC m=+0.109159867 container init 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Oct  9 10:57:27 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.izrudc(active, since 1.03046s)
Oct  9 10:57:27 compute-0 podman[5275]: 2025-10-09 10:57:27.755673482 +0000 UTC m=+0.114797597 container start 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 10:57:27 compute-0 podman[5275]: 2025-10-09 10:57:27.759503514 +0000 UTC m=+0.118627609 container attach 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:27 compute-0 podman[5275]: 2025-10-09 10:57:27.667548869 +0000 UTC m=+0.026672984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:27 compute-0 ceph-mon[4705]: from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:27 compute-0 ceph-mon[4705]: from='mgr.14102 192.168.122.100:0/4249878406' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:57:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073297152' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:57:28 compute-0 friendly_black[5292]: 
Oct  9 10:57:28 compute-0 friendly_black[5292]: {
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "health": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "status": "HEALTH_OK",
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "checks": {},
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "mutes": []
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "election_epoch": 5,
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "quorum": [
Oct  9 10:57:28 compute-0 friendly_black[5292]:        0
Oct  9 10:57:28 compute-0 friendly_black[5292]:    ],
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "quorum_names": [
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "compute-0"
Oct  9 10:57:28 compute-0 friendly_black[5292]:    ],
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "quorum_age": 9,
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "monmap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "epoch": 1,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "min_mon_release_name": "squid",
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_mons": 1
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "osdmap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "epoch": 1,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_osds": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_up_osds": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "osd_up_since": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_in_osds": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "osd_in_since": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_remapped_pgs": 0
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "pgmap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "pgs_by_state": [],
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_pgs": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_pools": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_objects": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "data_bytes": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "bytes_used": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "bytes_avail": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "bytes_total": 0
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "fsmap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "epoch": 1,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "btime": "2025-10-09T10:57:16.742582+0000",
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "by_rank": [],
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "up:standby": 0
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "mgrmap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "available": true,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "num_standbys": 0,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "modules": [
Oct  9 10:57:28 compute-0 friendly_black[5292]:            "iostat",
Oct  9 10:57:28 compute-0 friendly_black[5292]:            "nfs",
Oct  9 10:57:28 compute-0 friendly_black[5292]:            "restful"
Oct  9 10:57:28 compute-0 friendly_black[5292]:        ],
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "services": {}
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "servicemap": {
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "epoch": 1,
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "modified": "2025-10-09T10:57:16.745535+0000",
Oct  9 10:57:28 compute-0 friendly_black[5292]:        "services": {}
Oct  9 10:57:28 compute-0 friendly_black[5292]:    },
Oct  9 10:57:28 compute-0 friendly_black[5292]:    "progress_events": {}
Oct  9 10:57:28 compute-0 friendly_black[5292]: }
Oct  9 10:57:28 compute-0 systemd[1]: libpod-032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3.scope: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5275]: 2025-10-09 10:57:28.1702422 +0000 UTC m=+0.529366295 container died 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a576cd4ac4ee71dc1046c064adb77244c131c1a9fec5cb125d4eac9bfab73000-merged.mount: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5275]: 2025-10-09 10:57:28.209521258 +0000 UTC m=+0.568645353 container remove 032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3 (image=quay.io/ceph/ceph:v19, name=friendly_black, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 10:57:28 compute-0 systemd[1]: libpod-conmon-032745612c79401845b9e22254a0d6830a71b5b14b99d215f2d3ef1903acbcd3.scope: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.271946937 +0000 UTC m=+0.043246536 container create b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:28 compute-0 systemd[1]: Started libpod-conmon-b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f.scope.
Oct  9 10:57:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec198e46b872f8a74d8e8ca55965e14c7fd3242a5ef5343783a5a98f88c7b17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec198e46b872f8a74d8e8ca55965e14c7fd3242a5ef5343783a5a98f88c7b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec198e46b872f8a74d8e8ca55965e14c7fd3242a5ef5343783a5a98f88c7b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec198e46b872f8a74d8e8ca55965e14c7fd3242a5ef5343783a5a98f88c7b17/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.333906951 +0000 UTC m=+0.105206580 container init b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.340210443 +0000 UTC m=+0.111510052 container start b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.344308545 +0000 UTC m=+0.115608174 container attach b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.253119544 +0000 UTC m=+0.024419183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 10:57:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1047811815' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:57:28 compute-0 pedantic_sammet[5347]: 
Oct  9 10:57:28 compute-0 pedantic_sammet[5347]: [global]
Oct  9 10:57:28 compute-0 pedantic_sammet[5347]: 	fsid = e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:57:28 compute-0 pedantic_sammet[5347]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  9 10:57:28 compute-0 systemd[1]: libpod-b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f.scope: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.695048548 +0000 UTC m=+0.466348157 container died b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ec198e46b872f8a74d8e8ca55965e14c7fd3242a5ef5343783a5a98f88c7b17-merged.mount: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5330]: 2025-10-09 10:57:28.727800827 +0000 UTC m=+0.499100436 container remove b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f (image=quay.io/ceph/ceph:v19, name=pedantic_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 10:57:28 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:28 compute-0 systemd[1]: libpod-conmon-b85bf597e95d821ccb06a8a99e0ce833d4bce2f895b254805cc1f9744a8ecd5f.scope: Deactivated successfully.
Oct  9 10:57:28 compute-0 podman[5386]: 2025-10-09 10:57:28.781793376 +0000 UTC m=+0.037103269 container create ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:28 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1047811815' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:57:28 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.izrudc(active, since 2s)
Oct  9 10:57:28 compute-0 systemd[1]: Started libpod-conmon-ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148.scope.
Oct  9 10:57:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a1222386eac38af61de91b7d58d5373475fb2c304236b963348a962c4ec0078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a1222386eac38af61de91b7d58d5373475fb2c304236b963348a962c4ec0078/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a1222386eac38af61de91b7d58d5373475fb2c304236b963348a962c4ec0078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:28 compute-0 podman[5386]: 2025-10-09 10:57:28.845736324 +0000 UTC m=+0.101046217 container init ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  9 10:57:28 compute-0 podman[5386]: 2025-10-09 10:57:28.850219128 +0000 UTC m=+0.105529021 container start ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:28 compute-0 podman[5386]: 2025-10-09 10:57:28.853168992 +0000 UTC m=+0.108478885 container attach ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:28 compute-0 podman[5386]: 2025-10-09 10:57:28.766682103 +0000 UTC m=+0.021992016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct  9 10:57:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336853512' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  9 10:57:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336853512' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  1: '-n'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  2: 'mgr.compute-0.izrudc'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  3: '-f'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  4: '--setuser'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  5: 'ceph'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  6: '--setgroup'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  7: 'ceph'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr respawn  exe_path /proc/self/exe
Oct  9 10:57:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.izrudc(active, since 3s)
Oct  9 10:57:29 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1336853512' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  9 10:57:29 compute-0 systemd[1]: libpod-ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148.scope: Deactivated successfully.
Oct  9 10:57:29 compute-0 conmon[5402]: conmon ea8a635a532dc713333c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148.scope/container/memory.events
Oct  9 10:57:29 compute-0 podman[5386]: 2025-10-09 10:57:29.865209405 +0000 UTC m=+1.120519298 container died ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setuser ceph since I am not root
Oct  9 10:57:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setgroup ceph since I am not root
Oct  9 10:57:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a1222386eac38af61de91b7d58d5373475fb2c304236b963348a962c4ec0078-merged.mount: Deactivated successfully.
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 10:57:29 compute-0 podman[5386]: 2025-10-09 10:57:29.963968339 +0000 UTC m=+1.219278232 container remove ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148 (image=quay.io/ceph/ceph:v19, name=eloquent_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:57:29 compute-0 systemd[1]: libpod-conmon-ea8a635a532dc713333c0e55f869ec31998ee120c07df9659dc9e673d8959148.scope: Deactivated successfully.
Oct  9 10:57:29 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.047417891 +0000 UTC m=+0.059778435 container create 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:30.083+0000 7f57ad548140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.008711712 +0000 UTC m=+0.021072266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:30 compute-0 systemd[1]: Started libpod-conmon-3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c.scope.
Oct  9 10:57:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99b2810bba678374c600aff477573247076636bd63459d95d43738acbfb430a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99b2810bba678374c600aff477573247076636bd63459d95d43738acbfb430a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99b2810bba678374c600aff477573247076636bd63459d95d43738acbfb430a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.150557014 +0000 UTC m=+0.162917558 container init 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.155488433 +0000 UTC m=+0.167848957 container start 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.164963196 +0000 UTC m=+0.177323750 container attach 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:30.173+0000 7f57ad548140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 10:57:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:57:30 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910434682' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:57:30 compute-0 goofy_khorana[5478]: {
Oct  9 10:57:30 compute-0 goofy_khorana[5478]:    "epoch": 5,
Oct  9 10:57:30 compute-0 goofy_khorana[5478]:    "available": true,
Oct  9 10:57:30 compute-0 goofy_khorana[5478]:    "active_name": "compute-0.izrudc",
Oct  9 10:57:30 compute-0 goofy_khorana[5478]:    "num_standby": 0
Oct  9 10:57:30 compute-0 goofy_khorana[5478]: }
Oct  9 10:57:30 compute-0 systemd[1]: libpod-3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c.scope: Deactivated successfully.
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.562626882 +0000 UTC m=+0.574987406 container died 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 10:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b99b2810bba678374c600aff477573247076636bd63459d95d43738acbfb430a-merged.mount: Deactivated successfully.
Oct  9 10:57:30 compute-0 podman[5460]: 2025-10-09 10:57:30.658266935 +0000 UTC m=+0.670627459 container remove 3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c (image=quay.io/ceph/ceph:v19, name=goofy_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:57:30 compute-0 systemd[1]: libpod-conmon-3f232ab521db05cbfd6b8f712bdaead319a5c666ff11b481092b4268e2df8b3c.scope: Deactivated successfully.
Oct  9 10:57:30 compute-0 podman[5528]: 2025-10-09 10:57:30.728860827 +0000 UTC m=+0.053160934 container create c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:30 compute-0 systemd[1]: Started libpod-conmon-c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374.scope.
Oct  9 10:57:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886aae851b9a87e12f0c6dcf8bd013de931b14d36afbfd6a452183348feeddb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886aae851b9a87e12f0c6dcf8bd013de931b14d36afbfd6a452183348feeddb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886aae851b9a87e12f0c6dcf8bd013de931b14d36afbfd6a452183348feeddb5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:30 compute-0 podman[5528]: 2025-10-09 10:57:30.698578087 +0000 UTC m=+0.022878254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:30 compute-0 podman[5528]: 2025-10-09 10:57:30.796147872 +0000 UTC m=+0.120447999 container init c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:30 compute-0 podman[5528]: 2025-10-09 10:57:30.800842612 +0000 UTC m=+0.125142719 container start c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:57:30 compute-0 podman[5528]: 2025-10-09 10:57:30.817498875 +0000 UTC m=+0.141799002 container attach c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:57:30 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1336853512' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:30.994+0000 7f57ad548140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:57:30 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:31.679+0000 7f57ad548140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:31.849+0000 7f57ad548140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:31.926+0000 7f57ad548140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 10:57:31 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:57:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:32.067+0000 7f57ad548140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 10:57:32 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.071+0000 7f57ad548140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.277+0000 7f57ad548140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.350+0000 7f57ad548140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.417+0000 7f57ad548140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.497+0000 7f57ad548140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.575+0000 7f57ad548140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:33.940+0000 7f57ad548140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:57:33 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 10:57:34 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:57:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:34.042+0000 7f57ad548140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:57:34 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 10:57:34 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 10:57:34 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:57:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:34.467+0000 7f57ad548140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:57:34 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.058+0000 7f57ad548140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.135+0000 7f57ad548140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.219+0000 7f57ad548140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.371+0000 7f57ad548140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.447+0000 7f57ad548140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.611+0000 7f57ad548140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:35.833+0000 7f57ad548140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 10:57:35 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 10:57:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:36.109+0000 7f57ad548140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 10:57:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:57:36.184+0000 7f57ad548140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x55c723010d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr handle_mgr_map Activating!
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.izrudc(active, starting, since 0.0337899s)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr handle_mgr_map I am now activating
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: balancer
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer INFO root] Starting
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Manager daemon compute-0.izrudc is now available
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_10:57:36
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer INFO root] do_upmap
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [balancer INFO root] No pools available
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  9 10:57:36 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 10:57:36 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 10:57:36 compute-0 ceph-mon[4705]: Manager daemon compute-0.izrudc is now available
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: cephadm
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: crash
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: devicehealth
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Starting
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: iostat
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: nfs
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: orchestrator
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: pg_autoscaler
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: progress
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [progress INFO root] Loading...
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [progress INFO root] No stored events to load
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded [] historic events
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] recovery thread starting
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] starting setup
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: rbd_support
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: restful
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: status
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [restful WARNING root] server not running: no certificate configured
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: telemetry
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] PerfHandler: starting
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TaskHandler: starting
Oct  9 10:57:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"} v 0)
Oct  9 10:57:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] setup complete
Oct  9 10:57:36 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: volumes
Oct  9 10:57:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.izrudc(active, since 1.04226s)
Oct  9 10:57:37 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  9 10:57:37 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  9 10:57:37 compute-0 cranky_solomon[5545]: {
Oct  9 10:57:37 compute-0 cranky_solomon[5545]:    "mgrmap_epoch": 7,
Oct  9 10:57:37 compute-0 cranky_solomon[5545]:    "initialized": true
Oct  9 10:57:37 compute-0 cranky_solomon[5545]: }
Oct  9 10:57:37 compute-0 systemd[1]: libpod-c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374.scope: Deactivated successfully.
Oct  9 10:57:37 compute-0 podman[5528]: 2025-10-09 10:57:37.254103742 +0000 UTC m=+6.578403849 container died c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-886aae851b9a87e12f0c6dcf8bd013de931b14d36afbfd6a452183348feeddb5-merged.mount: Deactivated successfully.
Oct  9 10:57:37 compute-0 ceph-mon[4705]: Found migration_current of "None". Setting to last migration.
Oct  9 10:57:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 10:57:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 10:57:37 compute-0 podman[5528]: 2025-10-09 10:57:37.309546398 +0000 UTC m=+6.633846505 container remove c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374 (image=quay.io/ceph/ceph:v19, name=cranky_solomon, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:37 compute-0 systemd[1]: libpod-conmon-c9e885655e0461ecc3151e1ae3acea390b52a7c498bdfe33fb3a238b381b7374.scope: Deactivated successfully.
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.369131047 +0000 UTC m=+0.040305183 container create 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:57:37 compute-0 systemd[1]: Started libpod-conmon-3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65.scope.
Oct  9 10:57:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccac1f923ca7de227693418cceadbff27db216184b10dab30f2eb1238d16753/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccac1f923ca7de227693418cceadbff27db216184b10dab30f2eb1238d16753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccac1f923ca7de227693418cceadbff27db216184b10dab30f2eb1238d16753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.348572068 +0000 UTC m=+0.019746234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.459017866 +0000 UTC m=+0.130192032 container init 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.464632425 +0000 UTC m=+0.135806581 container start 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.468634103 +0000 UTC m=+0.139808239 container attach 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 10:57:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct  9 10:57:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct  9 10:57:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:37 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct  9 10:57:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 10:57:37 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 10:57:37 compute-0 systemd[1]: libpod-3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65.scope: Deactivated successfully.
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.881021491 +0000 UTC m=+0.552195637 container died 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ccac1f923ca7de227693418cceadbff27db216184b10dab30f2eb1238d16753-merged.mount: Deactivated successfully.
Oct  9 10:57:37 compute-0 podman[5694]: 2025-10-09 10:57:37.918013176 +0000 UTC m=+0.589187302 container remove 3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65 (image=quay.io/ceph/ceph:v19, name=sleepy_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 10:57:37 compute-0 systemd[1]: libpod-conmon-3bcd87404fad09e965db5961283a4637923fc5f1c79d986ad9899e0e4abf9a65.scope: Deactivated successfully.
Oct  9 10:57:37 compute-0 podman[5749]: 2025-10-09 10:57:37.973979118 +0000 UTC m=+0.039056281 container create 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:57:38 compute-0 systemd[1]: Started libpod-conmon-52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a.scope.
Oct  9 10:57:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eda593670650f9123d7fee7377af867036451708e88605daa59f34836a35ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eda593670650f9123d7fee7377af867036451708e88605daa59f34836a35ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eda593670650f9123d7fee7377af867036451708e88605daa59f34836a35ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:38.051721888 +0000 UTC m=+0.116799071 container init 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:37.956347003 +0000 UTC m=+0.021424176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:38.056575663 +0000 UTC m=+0.121652826 container start 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:38.061225173 +0000 UTC m=+0.126302376 container attach 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct  9 10:57:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO root] Set ssh ssh_user
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  9 10:57:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct  9 10:57:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO root] Set ssh ssh_config
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  9 10:57:38 compute-0 ecstatic_kapitsa[5765]: ssh user set to ceph-admin. sudo will be used
Oct  9 10:57:38 compute-0 systemd[1]: libpod-52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a.scope: Deactivated successfully.
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:38.428255958 +0000 UTC m=+0.493333121 container died 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6eda593670650f9123d7fee7377af867036451708e88605daa59f34836a35ea-merged.mount: Deactivated successfully.
Oct  9 10:57:38 compute-0 podman[5749]: 2025-10-09 10:57:38.483151756 +0000 UTC m=+0.548228919 container remove 52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a (image=quay.io/ceph/ceph:v19, name=ecstatic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:38 compute-0 systemd[1]: libpod-conmon-52448af849c2028a0cb2c716e7605195ccedd429f1fbeb9f8dc02c6db2d2865a.scope: Deactivated successfully.
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.543090106 +0000 UTC m=+0.039189416 container create ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:10:57:38] ENGINE Bus STARTING
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:10:57:38] ENGINE Bus STARTING
Oct  9 10:57:38 compute-0 systemd[1]: Started libpod-conmon-ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a.scope.
Oct  9 10:57:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.601037801 +0000 UTC m=+0.097137111 container init ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.609983458 +0000 UTC m=+0.106082778 container start ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.614177652 +0000 UTC m=+0.110276982 container attach ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.524071716 +0000 UTC m=+0.020171046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:10:57:38] ENGINE Serving on http://192.168.122.100:8765
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:10:57:38] ENGINE Serving on http://192.168.122.100:8765
Oct  9 10:57:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924100 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:10:57:38] ENGINE Serving on https://192.168.122.100:7150
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:10:57:38] ENGINE Serving on https://192.168.122.100:7150
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:10:57:38] ENGINE Bus STARTED
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:10:57:38] ENGINE Bus STARTED
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:10:57:38] ENGINE Client ('192.168.122.100', 40930) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:10:57:38] ENGINE Client ('192.168.122.100', 40930) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 10:57:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 10:57:38 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 10:57:38 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.izrudc(active, since 2s)
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct  9 10:57:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: [cephadm INFO root] Set ssh private key
Oct  9 10:57:38 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  9 10:57:38 compute-0 systemd[1]: libpod-ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a.scope: Deactivated successfully.
Oct  9 10:57:38 compute-0 podman[5804]: 2025-10-09 10:57:38.971879108 +0000 UTC m=+0.467978458 container died ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1467ea4f265f208a0794e301d33f79ba8220e65d5c1e8316aa91f70c5fc4143a-merged.mount: Deactivated successfully.
Oct  9 10:57:39 compute-0 podman[5804]: 2025-10-09 10:57:39.014951478 +0000 UTC m=+0.511050788 container remove ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a (image=quay.io/ceph/ceph:v19, name=flamboyant_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 10:57:39 compute-0 systemd[1]: libpod-conmon-ff8584c1390285f80895982d82f3170c1f4b76dbebba4f7d448cbf6520995e1a.scope: Deactivated successfully.
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.070464026 +0000 UTC m=+0.038828415 container create 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:39 compute-0 systemd[1]: Started libpod-conmon-8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4.scope.
Oct  9 10:57:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.052069397 +0000 UTC m=+0.020433806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.200588653 +0000 UTC m=+0.168953072 container init 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.205745429 +0000 UTC m=+0.174109808 container start 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.2214074 +0000 UTC m=+0.189771809 container attach 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 10:57:39 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct  9 10:57:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:39 compute-0 ceph-mgr[4997]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  9 10:57:39 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  9 10:57:39 compute-0 systemd[1]: libpod-8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4.scope: Deactivated successfully.
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.584466919 +0000 UTC m=+0.552831318 container died 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:39 compute-0 ceph-mon[4705]: Set ssh ssh_user
Oct  9 10:57:39 compute-0 ceph-mon[4705]: Set ssh ssh_config
Oct  9 10:57:39 compute-0 ceph-mon[4705]: ssh user set to ceph-admin. sudo will be used
Oct  9 10:57:39 compute-0 ceph-mon[4705]: [09/Oct/2025:10:57:38] ENGINE Bus STARTING
Oct  9 10:57:39 compute-0 ceph-mon[4705]: [09/Oct/2025:10:57:38] ENGINE Serving on http://192.168.122.100:8765
Oct  9 10:57:39 compute-0 ceph-mon[4705]: [09/Oct/2025:10:57:38] ENGINE Serving on https://192.168.122.100:7150
Oct  9 10:57:39 compute-0 ceph-mon[4705]: [09/Oct/2025:10:57:38] ENGINE Bus STARTED
Oct  9 10:57:39 compute-0 ceph-mon[4705]: [09/Oct/2025:10:57:38] ENGINE Client ('192.168.122.100', 40930) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 10:57:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:39 compute-0 ceph-mon[4705]: Set ssh ssh_identity_key
Oct  9 10:57:39 compute-0 ceph-mon[4705]: Set ssh private key
Oct  9 10:57:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f2cd60f9a5a61f58d3365d0cfb955dae54283ae6a4e02e50ba45988f7f699ab-merged.mount: Deactivated successfully.
Oct  9 10:57:39 compute-0 podman[5879]: 2025-10-09 10:57:39.706164856 +0000 UTC m=+0.674529245 container remove 8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4 (image=quay.io/ceph/ceph:v19, name=priceless_meitner, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:39 compute-0 systemd[1]: libpod-conmon-8da96050e2791d552fefec22943a5505dd70b57f33fab3a5f1356cb4452546e4.scope: Deactivated successfully.
Oct  9 10:57:39 compute-0 podman[5933]: 2025-10-09 10:57:39.768514703 +0000 UTC m=+0.043542946 container create a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:57:39 compute-0 systemd[1]: Started libpod-conmon-a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219.scope.
Oct  9 10:57:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46009f3bfb3a86708b00897c4cec46a66dd6b235fdcee6dd345490a9e631ead0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46009f3bfb3a86708b00897c4cec46a66dd6b235fdcee6dd345490a9e631ead0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46009f3bfb3a86708b00897c4cec46a66dd6b235fdcee6dd345490a9e631ead0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:39 compute-0 podman[5933]: 2025-10-09 10:57:39.746827729 +0000 UTC m=+0.021855982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:39 compute-0 podman[5933]: 2025-10-09 10:57:39.846520981 +0000 UTC m=+0.121549244 container init a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:39 compute-0 podman[5933]: 2025-10-09 10:57:39.851315155 +0000 UTC m=+0.126343398 container start a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 10:57:39 compute-0 podman[5933]: 2025-10-09 10:57:39.868798045 +0000 UTC m=+0.143826338 container attach a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:40 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:40 compute-0 keen_einstein[5949]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeETEGT1LzpHp0b9qJeWr0287dSg1nSbNHTfBK1+GGFl0+ompJ4nMSdwEjo0lzzJbpXv+7v25lJOSPvHmvee5MuuLbOIUZ3SH5GiitYlXOvBe2oWbIRxGycJSoj8X7UiFU30coc2FcyQ7lIR9rgcWvH+sMhQpDpXXl8ni5OpG+Hc/Tz5i9QFQLu5pYd3qVGBeLkJO/cQS4PFbK1s71ehw5ytrnjUSzKC3nxK2SmWN7L5mKnoTYmt9nijZtUUHGqEh9hBovqUHFXN++ZyLCW9zX2QIkJhFOAHJqXuZdHEsQQMwHmeTbnwlDay7kUbUtuurK2Kf5uLFGbdC+Fm1y3h6j63SUY+oujZ8wBLDElbBKZ0Hg2Xe80LUkENgrHI1VnOIZzMsMdd+fm2lSse+DHNGQlxlPrGjFebgV4uK5mwpzZmSr01NvikqHcZeJ+TWAfs70PySo7/pKzcqwzGzXtl4HWI1hM2kJ5iTDW4bn6xj7UX7d+3AFtN+Zofy8EPTuCkc= zuul@controller
Oct  9 10:57:40 compute-0 systemd[1]: libpod-a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219.scope: Deactivated successfully.
Oct  9 10:57:40 compute-0 podman[5933]: 2025-10-09 10:57:40.199858858 +0000 UTC m=+0.474887111 container died a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 10:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-46009f3bfb3a86708b00897c4cec46a66dd6b235fdcee6dd345490a9e631ead0-merged.mount: Deactivated successfully.
Oct  9 10:57:40 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:40 compute-0 podman[5933]: 2025-10-09 10:57:40.263658361 +0000 UTC m=+0.538686604 container remove a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219 (image=quay.io/ceph/ceph:v19, name=keen_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:40 compute-0 systemd[1]: libpod-conmon-a11e0331c1b71932ab6d2ed6e3ed40df83acaa379755a08fbaa1d4bd735e7219.scope: Deactivated successfully.
Oct  9 10:57:40 compute-0 podman[5986]: 2025-10-09 10:57:40.315270485 +0000 UTC m=+0.026139539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:40 compute-0 podman[5986]: 2025-10-09 10:57:40.473232173 +0000 UTC m=+0.184101197 container create eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 10:57:40 compute-0 systemd[1]: Started libpod-conmon-eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4.scope.
Oct  9 10:57:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7e8bcc8e07ff94b8fb68862d1ea72a8ef9c72d3b2e8f9b92328c549d8ff37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7e8bcc8e07ff94b8fb68862d1ea72a8ef9c72d3b2e8f9b92328c549d8ff37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7e8bcc8e07ff94b8fb68862d1ea72a8ef9c72d3b2e8f9b92328c549d8ff37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:40 compute-0 podman[5986]: 2025-10-09 10:57:40.711486064 +0000 UTC m=+0.422355108 container init eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:57:40 compute-0 podman[5986]: 2025-10-09 10:57:40.717628761 +0000 UTC m=+0.428497785 container start eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:40 compute-0 ceph-mon[4705]: Set ssh ssh_identity_pub
Oct  9 10:57:40 compute-0 podman[5986]: 2025-10-09 10:57:40.733077225 +0000 UTC m=+0.443946269 container attach eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:41 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:41 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 10:57:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 10:57:41 compute-0 systemd-logind[846]: New session 6 of user ceph-admin.
Oct  9 10:57:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 10:57:41 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  9 10:57:41 compute-0 systemd-logind[846]: New session 8 of user ceph-admin.
Oct  9 10:57:41 compute-0 systemd[6033]: Queued start job for default target Main User Target.
Oct  9 10:57:41 compute-0 systemd[6033]: Created slice User Application Slice.
Oct  9 10:57:41 compute-0 systemd[6033]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:57:41 compute-0 systemd[6033]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:57:41 compute-0 systemd[6033]: Reached target Paths.
Oct  9 10:57:41 compute-0 systemd[6033]: Reached target Timers.
Oct  9 10:57:41 compute-0 systemd[6033]: Starting D-Bus User Message Bus Socket...
Oct  9 10:57:41 compute-0 systemd[6033]: Starting Create User's Volatile Files and Directories...
Oct  9 10:57:41 compute-0 systemd[6033]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:57:41 compute-0 systemd[6033]: Reached target Sockets.
Oct  9 10:57:41 compute-0 systemd[6033]: Finished Create User's Volatile Files and Directories.
Oct  9 10:57:41 compute-0 systemd[6033]: Reached target Basic System.
Oct  9 10:57:41 compute-0 systemd[6033]: Reached target Main User Target.
Oct  9 10:57:41 compute-0 systemd[6033]: Startup finished in 144ms.
Oct  9 10:57:41 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  9 10:57:41 compute-0 systemd[1]: Started Session 6 of User ceph-admin.
Oct  9 10:57:41 compute-0 systemd[1]: Started Session 8 of User ceph-admin.
Oct  9 10:57:41 compute-0 systemd-logind[846]: New session 9 of user ceph-admin.
Oct  9 10:57:41 compute-0 systemd[1]: Started Session 9 of User ceph-admin.
Oct  9 10:57:42 compute-0 systemd-logind[846]: New session 10 of user ceph-admin.
Oct  9 10:57:42 compute-0 systemd[1]: Started Session 10 of User ceph-admin.
Oct  9 10:57:42 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  9 10:57:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  9 10:57:42 compute-0 systemd-logind[846]: New session 11 of user ceph-admin.
Oct  9 10:57:42 compute-0 systemd[1]: Started Session 11 of User ceph-admin.
Oct  9 10:57:42 compute-0 systemd-logind[846]: New session 12 of user ceph-admin.
Oct  9 10:57:42 compute-0 systemd[1]: Started Session 12 of User ceph-admin.
Oct  9 10:57:43 compute-0 systemd-logind[846]: New session 13 of user ceph-admin.
Oct  9 10:57:43 compute-0 systemd[1]: Started Session 13 of User ceph-admin.
Oct  9 10:57:43 compute-0 systemd-logind[846]: New session 14 of user ceph-admin.
Oct  9 10:57:43 compute-0 systemd[1]: Started Session 14 of User ceph-admin.
Oct  9 10:57:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053061 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:57:43 compute-0 systemd-logind[846]: New session 15 of user ceph-admin.
Oct  9 10:57:43 compute-0 systemd[1]: Started Session 15 of User ceph-admin.
Oct  9 10:57:43 compute-0 ceph-mon[4705]: Deploying cephadm binary to compute-0
Oct  9 10:57:44 compute-0 systemd-logind[846]: New session 16 of user ceph-admin.
Oct  9 10:57:44 compute-0 systemd[1]: Started Session 16 of User ceph-admin.
Oct  9 10:57:44 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:45 compute-0 systemd-logind[846]: New session 17 of user ceph-admin.
Oct  9 10:57:45 compute-0 systemd[1]: Started Session 17 of User ceph-admin.
Oct  9 10:57:45 compute-0 systemd-logind[846]: New session 18 of user ceph-admin.
Oct  9 10:57:45 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Oct  9 10:57:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:45 compute-0 ceph-mgr[4997]: [cephadm INFO root] Added host compute-0
Oct  9 10:57:45 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  9 10:57:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 10:57:45 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 10:57:45 compute-0 adoring_kapitsa[6003]: Added host 'compute-0' with addr '192.168.122.100'
Oct  9 10:57:45 compute-0 systemd[1]: libpod-eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4.scope: Deactivated successfully.
Oct  9 10:57:45 compute-0 podman[5986]: 2025-10-09 10:57:45.906725655 +0000 UTC m=+5.617594679 container died eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c7e8bcc8e07ff94b8fb68862d1ea72a8ef9c72d3b2e8f9b92328c549d8ff37-merged.mount: Deactivated successfully.
Oct  9 10:57:45 compute-0 podman[5986]: 2025-10-09 10:57:45.971333454 +0000 UTC m=+5.682202498 container remove eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4 (image=quay.io/ceph/ceph:v19, name=adoring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:45 compute-0 systemd[1]: libpod-conmon-eeb690cd0a1efc9df9b1a910c2b10a0cbe92b63c103ebda564007852dca760d4.scope: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.064743186 +0000 UTC m=+0.072639897 container create c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:46 compute-0 systemd[1]: Started libpod-conmon-c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3.scope.
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.021604944 +0000 UTC m=+0.029501715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6e61b672d5a0ee9e707e9fdbfc04986dc5a90968429489b6ef2c39ac579ad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6e61b672d5a0ee9e707e9fdbfc04986dc5a90968429489b6ef2c39ac579ad9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6e61b672d5a0ee9e707e9fdbfc04986dc5a90968429489b6ef2c39ac579ad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.171896928 +0000 UTC m=+0.179793659 container init c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.17914271 +0000 UTC m=+0.187039421 container start c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.190986029 +0000 UTC m=+0.198882740 container attach c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  9 10:57:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:57:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 optimistic_gould[6465]: Scheduled mon update...
Oct  9 10:57:46 compute-0 systemd[1]: libpod-c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3.scope: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.565128062 +0000 UTC m=+0.573024773 container died c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6e61b672d5a0ee9e707e9fdbfc04986dc5a90968429489b6ef2c39ac579ad9-merged.mount: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6442]: 2025-10-09 10:57:46.660544288 +0000 UTC m=+0.668440999 container remove c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3 (image=quay.io/ceph/ceph:v19, name=optimistic_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:57:46 compute-0 systemd[1]: libpod-conmon-c3cd5d5c7d87241ad9e9612617504b8521085eb3639b00d7070ea67554616db3.scope: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6527]: 2025-10-09 10:57:46.719738664 +0000 UTC m=+0.041051315 container create 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 10:57:46 compute-0 podman[6527]: 2025-10-09 10:57:46.696804659 +0000 UTC m=+0.018117300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:46 compute-0 systemd[1]: Started libpod-conmon-8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44.scope.
Oct  9 10:57:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f997a94ed7896c839752453ddf9168e21f3f608e03dd4749b7de19f9a0ad8fe8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f997a94ed7896c839752453ddf9168e21f3f608e03dd4749b7de19f9a0ad8fe8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f997a94ed7896c839752453ddf9168e21f3f608e03dd4749b7de19f9a0ad8fe8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 podman[6527]: 2025-10-09 10:57:46.872770985 +0000 UTC m=+0.194083616 container init 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:57:46 compute-0 podman[6527]: 2025-10-09 10:57:46.878788058 +0000 UTC m=+0.200100689 container start 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:46 compute-0 podman[6527]: 2025-10-09 10:57:46.884433899 +0000 UTC m=+0.205746560 container attach 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 ceph-mon[4705]: Added host compute-0
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:47 compute-0 podman[6480]: 2025-10-09 10:57:47.002477359 +0000 UTC m=+0.734921849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.101707058 +0000 UTC m=+0.035158418 container create 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:47 compute-0 systemd[1]: Started libpod-conmon-9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1.scope.
Oct  9 10:57:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.16578342 +0000 UTC m=+0.099234800 container init 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.170760059 +0000 UTC m=+0.104211419 container start 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.174319893 +0000 UTC m=+0.107771253 container attach 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.084689113 +0000 UTC m=+0.018140493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:47 compute-0 epic_jackson[6596]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  9 10:57:47 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:47 compute-0 systemd[1]: libpod-9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  9 10:57:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.26228131 +0000 UTC m=+0.195732690 container died 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:47 compute-0 peaceful_poitras[6543]: Scheduled mgr update...
Oct  9 10:57:47 compute-0 systemd[1]: libpod-8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e5682de45e01d2d9b285c5de43947ec6fcfdc9011c89157d3008f4ee0ebd1fe-merged.mount: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6580]: 2025-10-09 10:57:47.306202836 +0000 UTC m=+0.239654196 container remove 9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1 (image=quay.io/ceph/ceph:v19, name=epic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:47 compute-0 podman[6527]: 2025-10-09 10:57:47.307050864 +0000 UTC m=+0.628363515 container died 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 10:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f997a94ed7896c839752453ddf9168e21f3f608e03dd4749b7de19f9a0ad8fe8-merged.mount: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6527]: 2025-10-09 10:57:47.3440729 +0000 UTC m=+0.665385531 container remove 8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44 (image=quay.io/ceph/ceph:v19, name=peaceful_poitras, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:47 compute-0 systemd[1]: libpod-conmon-8af906f1955bbbe19e6a9c370be55ebe04206b7ebda87b7d1483b8582e861d44.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 systemd[1]: libpod-conmon-9468be0babbd39e9be78465869546238ca985a61a00956a3608ecfb06f42f0f1.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6628]: 2025-10-09 10:57:47.413647248 +0000 UTC m=+0.045912082 container create ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:57:47 compute-0 systemd[1]: Started libpod-conmon-ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b.scope.
Oct  9 10:57:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f2c177c4034e016e587b4dccc2d426b94e5e703ccc29827a83a7a2d019daac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f2c177c4034e016e587b4dccc2d426b94e5e703ccc29827a83a7a2d019daac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f2c177c4034e016e587b4dccc2d426b94e5e703ccc29827a83a7a2d019daac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 podman[6628]: 2025-10-09 10:57:47.482234555 +0000 UTC m=+0.114499409 container init ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:57:47 compute-0 podman[6628]: 2025-10-09 10:57:47.391335543 +0000 UTC m=+0.023600387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:47 compute-0 podman[6628]: 2025-10-09 10:57:47.490158919 +0000 UTC m=+0.122423753 container start ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 10:57:47 compute-0 chronyd[853]: Selected source 148.113.192.80 (pool.ntp.org)
Oct  9 10:57:46 compute-0 chronyd[853]: System clock wrong by -1.248431 seconds
Oct  9 10:57:46 compute-0 systemd-journald[690]: Time jumped backwards, rotating.
Oct  9 10:57:46 compute-0 podman[6628]: 2025-10-09 10:57:46.24736425 +0000 UTC m=+0.128059683 container attach ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 10:57:46 compute-0 chronyd[853]: System clock was stepped by -1.248431 seconds
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 rsyslogd[1315]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:57:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service crash spec with placement *
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  9 10:57:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:57:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 jolly_blackburn[6672]: Scheduled crash update...
Oct  9 10:57:46 compute-0 systemd[1]: libpod-ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b.scope: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6628]: 2025-10-09 10:57:46.635306715 +0000 UTC m=+0.516002128 container died ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:46 compute-0 ceph-mon[4705]: Saving service mon spec with placement count:5
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f2c177c4034e016e587b4dccc2d426b94e5e703ccc29827a83a7a2d019daac-merged.mount: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6628]: 2025-10-09 10:57:46.669028765 +0000 UTC m=+0.549724178 container remove ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b (image=quay.io/ceph/ceph:v19, name=jolly_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:57:46 compute-0 systemd[1]: libpod-conmon-ebf5f2138779b4662170409a0c45a60cee952aed8a1c1cb8777792e78baa452b.scope: Deactivated successfully.
Oct  9 10:57:46 compute-0 podman[6807]: 2025-10-09 10:57:46.747678664 +0000 UTC m=+0.050173288 container create 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:57:46 compute-0 systemd[1]: Started libpod-conmon-65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3.scope.
Oct  9 10:57:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a138396d042021450322138570ba3626b8a74cfca7d328d3f94ba5eecec92b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a138396d042021450322138570ba3626b8a74cfca7d328d3f94ba5eecec92b28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a138396d042021450322138570ba3626b8a74cfca7d328d3f94ba5eecec92b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:46 compute-0 podman[6807]: 2025-10-09 10:57:46.811393044 +0000 UTC m=+0.113887718 container init 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 10:57:46 compute-0 podman[6807]: 2025-10-09 10:57:46.817970525 +0000 UTC m=+0.120465169 container start 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:46 compute-0 podman[6807]: 2025-10-09 10:57:46.824389271 +0000 UTC m=+0.126883915 container attach 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:57:46 compute-0 podman[6807]: 2025-10-09 10:57:46.732492737 +0000 UTC m=+0.034987361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:46 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:47 compute-0 podman[6915]: 2025-10-09 10:57:47.146624002 +0000 UTC m=+0.051991507 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2322917026' entity='client.admin' 
Oct  9 10:57:47 compute-0 systemd[1]: libpod-65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6807]: 2025-10-09 10:57:47.202272414 +0000 UTC m=+0.504767058 container died 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a138396d042021450322138570ba3626b8a74cfca7d328d3f94ba5eecec92b28-merged.mount: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6807]: 2025-10-09 10:57:47.24496999 +0000 UTC m=+0.547464624 container remove 65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:47 compute-0 podman[6915]: 2025-10-09 10:57:47.250398974 +0000 UTC m=+0.155766489 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:47 compute-0 systemd[1]: libpod-conmon-65942884a4b1fb8061fe09433b8e1afb84bf77c9c61a02553795b514f12897e3.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.316966087 +0000 UTC m=+0.046041236 container create 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct  9 10:57:47 compute-0 systemd[1]: Started libpod-conmon-5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4.scope.
Oct  9 10:57:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b03281362209865c2ab1ac46035ec6b80d4f60f37d56b61f584e5280ecc75e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b03281362209865c2ab1ac46035ec6b80d4f60f37d56b61f584e5280ecc75e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b03281362209865c2ab1ac46035ec6b80d4f60f37d56b61f584e5280ecc75e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.297568996 +0000 UTC m=+0.026644175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.394229311 +0000 UTC m=+0.123304480 container init 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.399861262 +0000 UTC m=+0.128936411 container start 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.40480889 +0000 UTC m=+0.133884039 container attach 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:57:47 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct  9 10:57:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:47 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 7079 (sysctl)
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.772795306 +0000 UTC m=+0.501870445 container died 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:57:47 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  9 10:57:47 compute-0 systemd[1]: libpod-5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b03281362209865c2ab1ac46035ec6b80d4f60f37d56b61f584e5280ecc75e8-merged.mount: Deactivated successfully.
Oct  9 10:57:47 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  9 10:57:47 compute-0 podman[6949]: 2025-10-09 10:57:47.819335186 +0000 UTC m=+0.548410325 container remove 5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4 (image=quay.io/ceph/ceph:v19, name=amazing_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:57:47 compute-0 systemd[1]: libpod-conmon-5c349a18527e7fe650d7ef1145a0fe1810c3f09d44711dbd964714aa7cea6fb4.scope: Deactivated successfully.
Oct  9 10:57:47 compute-0 podman[7096]: 2025-10-09 10:57:47.873557673 +0000 UTC m=+0.034390433 container create 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:47 compute-0 systemd[1]: Started libpod-conmon-8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9.scope.
Oct  9 10:57:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8057705d4b60785959041f2989fb1527d5df06c654064a4e42bcee372e01bd02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8057705d4b60785959041f2989fb1527d5df06c654064a4e42bcee372e01bd02/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8057705d4b60785959041f2989fb1527d5df06c654064a4e42bcee372e01bd02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:47 compute-0 podman[7096]: 2025-10-09 10:57:47.940729955 +0000 UTC m=+0.101562735 container init 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:57:47 compute-0 podman[7096]: 2025-10-09 10:57:47.949492665 +0000 UTC m=+0.110325425 container start 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:57:47 compute-0 podman[7096]: 2025-10-09 10:57:47.955285161 +0000 UTC m=+0.116117951 container attach 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  9 10:57:47 compute-0 podman[7096]: 2025-10-09 10:57:47.859192193 +0000 UTC m=+0.020024973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:48 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2322917026' entity='client.admin' 
Oct  9 10:57:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:48 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:48 compute-0 ceph-mgr[4997]: [cephadm INFO root] Added label _admin to host compute-0
Oct  9 10:57:48 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  9 10:57:48 compute-0 friendly_pike[7115]: Added label _admin to host compute-0
Oct  9 10:57:48 compute-0 systemd[1]: libpod-8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9.scope: Deactivated successfully.
Oct  9 10:57:48 compute-0 podman[7096]: 2025-10-09 10:57:48.317588704 +0000 UTC m=+0.478421464 container died 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8057705d4b60785959041f2989fb1527d5df06c654064a4e42bcee372e01bd02-merged.mount: Deactivated successfully.
Oct  9 10:57:48 compute-0 podman[7096]: 2025-10-09 10:57:48.352855124 +0000 UTC m=+0.513687884 container remove 8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9 (image=quay.io/ceph/ceph:v19, name=friendly_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:48 compute-0 systemd[1]: libpod-conmon-8cec9cf176b86d65f3c06232e8adddd95a8f9a2d9f27e2388059daebc278c6b9.scope: Deactivated successfully.
Oct  9 10:57:48 compute-0 podman[7219]: 2025-10-09 10:57:48.417412992 +0000 UTC m=+0.044886399 container create df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:57:48 compute-0 systemd[1]: Started libpod-conmon-df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057.scope.
Oct  9 10:57:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f55da1532db331bc33fdc4caa5ab6100ce45f9e58481e749026683671529940/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f55da1532db331bc33fdc4caa5ab6100ce45f9e58481e749026683671529940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f55da1532db331bc33fdc4caa5ab6100ce45f9e58481e749026683671529940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:48 compute-0 podman[7219]: 2025-10-09 10:57:48.396431939 +0000 UTC m=+0.023905356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:48 compute-0 podman[7219]: 2025-10-09 10:57:48.497170306 +0000 UTC m=+0.124643743 container init df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:48 compute-0 podman[7219]: 2025-10-09 10:57:48.502125594 +0000 UTC m=+0.129599001 container start df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:57:48 compute-0 podman[7219]: 2025-10-09 10:57:48.507555298 +0000 UTC m=+0.135028705 container attach df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.911359361 +0000 UTC m=+0.033410921 container create de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:48 compute-0 systemd[1]: Started libpod-conmon-de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf.scope.
Oct  9 10:57:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.977884471 +0000 UTC m=+0.099936051 container init de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/915407978' entity='client.admin' 
Oct  9 10:57:48 compute-0 kind_elion[7249]: set mgr/dashboard/cluster/status
Oct  9 10:57:48 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.984303317 +0000 UTC m=+0.106354877 container start de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 10:57:48 compute-0 vibrant_sinoussi[7378]: 167 167
Oct  9 10:57:48 compute-0 systemd[1]: libpod-de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf.scope: Deactivated successfully.
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.987624544 +0000 UTC m=+0.109676124 container attach de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.988505052 +0000 UTC m=+0.110556642 container died de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 10:57:48 compute-0 podman[7364]: 2025-10-09 10:57:48.897143496 +0000 UTC m=+0.019195056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:57:49 compute-0 systemd[1]: libpod-df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057.scope: Deactivated successfully.
Oct  9 10:57:49 compute-0 podman[7219]: 2025-10-09 10:57:49.005202237 +0000 UTC m=+0.632675654 container died df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-84a1b20418ee0eacc96f8ae495f097422aaa3237b3de0d71d906fab060e55496-merged.mount: Deactivated successfully.
Oct  9 10:57:49 compute-0 podman[7364]: 2025-10-09 10:57:49.049442854 +0000 UTC m=+0.171494404 container remove de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 10:57:49 compute-0 systemd[1]: libpod-conmon-de654e81cdedb80221e7d93c098550cbdc3923b2d847e11bf839e58c7e279fdf.scope: Deactivated successfully.
Oct  9 10:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f55da1532db331bc33fdc4caa5ab6100ce45f9e58481e749026683671529940-merged.mount: Deactivated successfully.
Oct  9 10:57:49 compute-0 podman[7219]: 2025-10-09 10:57:49.080464237 +0000 UTC m=+0.707937644 container remove df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057 (image=quay.io/ceph/ceph:v19, name=kind_elion, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:57:49 compute-0 systemd[1]: libpod-conmon-df6f2eafc5b99d19832422e283b9c10ca789ecffc31d5af3dbe8fa2b9eccf057.scope: Deactivated successfully.
Oct  9 10:57:49 compute-0 podman[7417]: 2025-10-09 10:57:49.298149239 +0000 UTC m=+0.097188803 container create 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:57:49 compute-0 podman[7417]: 2025-10-09 10:57:49.223188589 +0000 UTC m=+0.022228173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:57:49 compute-0 ceph-mon[4705]: Saving service crash spec with placement *
Oct  9 10:57:49 compute-0 ceph-mon[4705]: Saving service mgr spec with placement count:2
Oct  9 10:57:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:49 compute-0 ceph-mon[4705]: Added label _admin to host compute-0
Oct  9 10:57:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:49 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/915407978' entity='client.admin' 
Oct  9 10:57:49 compute-0 systemd[1]: Started libpod-conmon-1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2.scope.
Oct  9 10:57:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f92eeae8a644551d5bff001901571c635165460252666e54142d3cb47ecdc74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f92eeae8a644551d5bff001901571c635165460252666e54142d3cb47ecdc74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f92eeae8a644551d5bff001901571c635165460252666e54142d3cb47ecdc74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f92eeae8a644551d5bff001901571c635165460252666e54142d3cb47ecdc74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 podman[7417]: 2025-10-09 10:57:49.459406083 +0000 UTC m=+0.258445677 container init 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:49 compute-0 podman[7417]: 2025-10-09 10:57:49.46895576 +0000 UTC m=+0.267995324 container start 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:49 compute-0 podman[7417]: 2025-10-09 10:57:49.511596946 +0000 UTC m=+0.310636510 container attach 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:57:49 compute-0 python3[7462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:49 compute-0 podman[7464]: 2025-10-09 10:57:49.685101433 +0000 UTC m=+0.059988263 container create 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:57:49 compute-0 podman[7464]: 2025-10-09 10:57:49.650320268 +0000 UTC m=+0.025207118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:49 compute-0 systemd[1]: Started libpod-conmon-31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e.scope.
Oct  9 10:57:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6db283631bb4fb753a0d1b7589c0bc9ba173ff2eab7b573b254ea3f39f6fddd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6db283631bb4fb753a0d1b7589c0bc9ba173ff2eab7b573b254ea3f39f6fddd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:49 compute-0 podman[7464]: 2025-10-09 10:57:49.868216857 +0000 UTC m=+0.243103717 container init 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:57:49 compute-0 podman[7464]: 2025-10-09 10:57:49.878000621 +0000 UTC m=+0.252887451 container start 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:57:49 compute-0 podman[7464]: 2025-10-09 10:57:49.916452102 +0000 UTC m=+0.291338932 container attach 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 10:57:50 compute-0 crazy_shaw[7433]: [
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:    {
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "available": false,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "being_replaced": false,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "ceph_device_lvm": false,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "lsm_data": {},
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "lvs": [],
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "path": "/dev/sr0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "rejected_reasons": [
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "Has a FileSystem",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "Insufficient space (<5GB)"
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        ],
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        "sys_api": {
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "actuators": null,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "device_nodes": [
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:                "sr0"
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            ],
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "devname": "sr0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "human_readable_size": "482.00 KB",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "id_bus": "ata",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "model": "QEMU DVD-ROM",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "nr_requests": "2",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "parent": "/dev/sr0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "partitions": {},
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "path": "/dev/sr0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "removable": "1",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "rev": "2.5+",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "ro": "0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "rotational": "0",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "sas_address": "",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "sas_device_handle": "",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "scheduler_mode": "mq-deadline",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "sectors": 0,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "sectorsize": "2048",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "size": 493568.0,
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "support_discard": "2048",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "type": "disk",
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:            "vendor": "QEMU"
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:        }
Oct  9 10:57:50 compute-0 crazy_shaw[7433]:    }
Oct  9 10:57:50 compute-0 crazy_shaw[7433]: ]
Oct  9 10:57:50 compute-0 systemd[1]: libpod-1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2.scope: Deactivated successfully.
Oct  9 10:57:50 compute-0 podman[7417]: 2025-10-09 10:57:50.170427456 +0000 UTC m=+0.969467020 container died 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f92eeae8a644551d5bff001901571c635165460252666e54142d3cb47ecdc74-merged.mount: Deactivated successfully.
Oct  9 10:57:50 compute-0 podman[7417]: 2025-10-09 10:57:50.217262917 +0000 UTC m=+1.016302481 container remove 1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_shaw, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:50 compute-0 systemd[1]: libpod-conmon-1e58f6ddd16f27862278b81f74dbd36b289f06a52119c223093b8b10d8e931e2.scope: Deactivated successfully.
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2870451243' entity='client.admin' 
Oct  9 10:57:50 compute-0 systemd[1]: libpod-31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e.scope: Deactivated successfully.
Oct  9 10:57:50 compute-0 conmon[7485]: conmon 31eac7fdc7a6920b2a34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e.scope/container/memory.events
Oct  9 10:57:50 compute-0 podman[7464]: 2025-10-09 10:57:50.262227727 +0000 UTC m=+0.637114557 container died 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6db283631bb4fb753a0d1b7589c0bc9ba173ff2eab7b573b254ea3f39f6fddd-merged.mount: Deactivated successfully.
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:57:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:50 compute-0 podman[7464]: 2025-10-09 10:57:50.308659563 +0000 UTC m=+0.683546393 container remove 31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e (image=quay.io/ceph/ceph:v19, name=clever_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:57:50 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:57:50 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:57:50 compute-0 systemd[1]: libpod-conmon-31eac7fdc7a6920b2a34ab1255ece747901a8b39913fd295831181fef2246e2e.scope: Deactivated successfully.
Oct  9 10:57:50 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:57:50 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:57:50 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2870451243' entity='client.admin' 
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:57:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:51 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[8949]: Invoked with j351312200559 30 /home/zuul/.ansible/tmp/ansible-tmp-1760007470.6581836-33704-1070937506328/AnsiballZ_command.py _
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[9008]: Starting module and watcher
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[9008]: Start watching 9011 (30)
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[9011]: Start module (9011)
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[8949]: Return async_wrapper task started.
Oct  9 10:57:51 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:57:51 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:57:51 compute-0 python3[9017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.451124674 +0000 UTC m=+0.040453977 container create e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:51 compute-0 systemd[1]: Started libpod-conmon-e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b.scope.
Oct  9 10:57:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f07ccfb89ee208c8ef80babd467be7c430f0fb491b2a87c25ea22eea7bbea6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f07ccfb89ee208c8ef80babd467be7c430f0fb491b2a87c25ea22eea7bbea6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.432641132 +0000 UTC m=+0.021970445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.529663309 +0000 UTC m=+0.118992612 container init e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.537234611 +0000 UTC m=+0.126563904 container start e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.540188226 +0000 UTC m=+0.129517529 container attach e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct  9 10:57:51 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:57:51 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:57:51 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 10:57:51 compute-0 recursing_kare[9143]: 
Oct  9 10:57:51 compute-0 recursing_kare[9143]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 10:57:51 compute-0 systemd[1]: libpod-e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b.scope: Deactivated successfully.
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.89533532 +0000 UTC m=+0.484664643 container died e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 10:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f07ccfb89ee208c8ef80babd467be7c430f0fb491b2a87c25ea22eea7bbea6-merged.mount: Deactivated successfully.
Oct  9 10:57:51 compute-0 podman[9080]: 2025-10-09 10:57:51.931367714 +0000 UTC m=+0.520697037 container remove e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b (image=quay.io/ceph/ceph:v19, name=recursing_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:57:51 compute-0 systemd[1]: libpod-conmon-e2e4ac90a59577b8ffb98101cf1e80ad12d653645e42fc685e7074ef61b79e5b.scope: Deactivated successfully.
Oct  9 10:57:51 compute-0 ansible-async_wrapper.py[9011]: Module complete (9011)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:57:52 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:52 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev e96d2ca5-b429-4aaa-a2fc-94bf5af97d43 (Updating crash deployment (+1 -> 1))
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:52 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:52 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  9 10:57:52 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  9 10:57:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:57:52 compute-0 python3[9650]: ansible-ansible.legacy.async_status Invoked with jid=j351312200559.8949 mode=status _async_dir=/root/.ansible_async
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.818757055 +0000 UTC m=+0.034978641 container create de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:52 compute-0 systemd[1]: Started libpod-conmon-de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd.scope.
Oct  9 10:57:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.872257249 +0000 UTC m=+0.088479365 container init de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.877424254 +0000 UTC m=+0.093645840 container start de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:52 compute-0 charming_cartwright[9753]: 167 167
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.881508665 +0000 UTC m=+0.097730251 container attach de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:52 compute-0 systemd[1]: libpod-de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd.scope: Deactivated successfully.
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.881985061 +0000 UTC m=+0.098206657 container died de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.804127867 +0000 UTC m=+0.020349473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7249ffee02e10ede043014603c14e835aec1c2a20c9f5315ef93da1ae0847ca2-merged.mount: Deactivated successfully.
Oct  9 10:57:52 compute-0 podman[9736]: 2025-10-09 10:57:52.913586373 +0000 UTC m=+0.129807959 container remove de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cartwright, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 10:57:52 compute-0 systemd[1]: libpod-conmon-de2684543b523a47233f043d9dbae5a855d3c7afce3e730ae96baa1902b02dcd.scope: Deactivated successfully.
Oct  9 10:57:52 compute-0 python3[9745]: ansible-ansible.legacy.async_status Invoked with jid=j351312200559.8949 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 10:57:52 compute-0 systemd[1]: Reloading.
Oct  9 10:57:52 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:53 compute-0 systemd[1]: Reloading.
Oct  9 10:57:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:57:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:57:53 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:57:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:57:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:57:53 compute-0 ceph-mon[4705]: Deploying daemon crash.compute-0 on compute-0
Oct  9 10:57:53 compute-0 systemd[1]: Starting Ceph crash.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:57:53 compute-0 python3[9878]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 10:57:53 compute-0 podman[9927]: 2025-10-09 10:57:53.713553534 +0000 UTC m=+0.044076533 container create 29c3c53cfe8b7aa5e72a0a6be9a3c5c717c892e290bcd041263de1288023c1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12c41c11c8db4537f54a1cfe2e15e6284bc805c4d8a8ee6387aed7ed05278f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12c41c11c8db4537f54a1cfe2e15e6284bc805c4d8a8ee6387aed7ed05278f3/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12c41c11c8db4537f54a1cfe2e15e6284bc805c4d8a8ee6387aed7ed05278f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12c41c11c8db4537f54a1cfe2e15e6284bc805c4d8a8ee6387aed7ed05278f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:53 compute-0 podman[9927]: 2025-10-09 10:57:53.778594147 +0000 UTC m=+0.109117176 container init 29c3c53cfe8b7aa5e72a0a6be9a3c5c717c892e290bcd041263de1288023c1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:53 compute-0 podman[9927]: 2025-10-09 10:57:53.784062962 +0000 UTC m=+0.114585961 container start 29c3c53cfe8b7aa5e72a0a6be9a3c5c717c892e290bcd041263de1288023c1c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:53 compute-0 podman[9927]: 2025-10-09 10:57:53.692734557 +0000 UTC m=+0.023257576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:57:53 compute-0 bash[9927]: 29c3c53cfe8b7aa5e72a0a6be9a3c5c717c892e290bcd041263de1288023c1c3
Oct  9 10:57:53 compute-0 systemd[1]: Started Ceph crash.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev e96d2ca5-b429-4aaa-a2fc-94bf5af97d43 (Updating crash deployment (+1 -> 1))
Oct  9 10:57:53 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event e96d2ca5-b429-4aaa-a2fc-94bf5af97d43 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.943+0000 7ff7cc1a8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.943+0000 7ff7cc1a8640 -1 AuthRegistry(0x7ff7c4069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.944+0000 7ff7cc1a8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.944+0000 7ff7cc1a8640 -1 AuthRegistry(0x7ff7cc1a6ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.945+0000 7ff7c9f1d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: 2025-10-09T10:57:53.945+0000 7ff7cc1a8640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  9 10:57:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-0[9942]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  9 10:57:54 compute-0 python3[9985]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.184311111 +0000 UTC m=+0.043923448 container create 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:57:54 compute-0 systemd[1]: Started libpod-conmon-3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587.scope.
Oct  9 10:57:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c5fae2211d2415f435cc5ef9d43d5c3e492713cb045b9421b2aee2cb7ce8d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c5fae2211d2415f435cc5ef9d43d5c3e492713cb045b9421b2aee2cb7ce8d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c5fae2211d2415f435cc5ef9d43d5c3e492713cb045b9421b2aee2cb7ce8d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.257623299 +0000 UTC m=+0.117235646 container init 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.16305239 +0000 UTC m=+0.022664747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.265352807 +0000 UTC m=+0.124965144 container start 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.283964743 +0000 UTC m=+0.143577110 container attach 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:57:54 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 10:57:54 compute-0 hungry_mahavira[10076]: 
Oct  9 10:57:54 compute-0 hungry_mahavira[10076]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 10:57:54 compute-0 systemd[1]: libpod-3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587.scope: Deactivated successfully.
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.653583311 +0000 UTC m=+0.513195648 container died 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:54 compute-0 podman[10173]: 2025-10-09 10:57:54.667076823 +0000 UTC m=+0.062710569 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6c5fae2211d2415f435cc5ef9d43d5c3e492713cb045b9421b2aee2cb7ce8d9-merged.mount: Deactivated successfully.
Oct  9 10:57:54 compute-0 podman[10058]: 2025-10-09 10:57:54.69510896 +0000 UTC m=+0.554721297 container remove 3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587 (image=quay.io/ceph/ceph:v19, name=hungry_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:54 compute-0 systemd[1]: libpod-conmon-3eb9367e1d5d22c61ca229bc46abd4c58086bcc4321d7b0c5d97fd801f996587.scope: Deactivated successfully.
Oct  9 10:57:54 compute-0 podman[10173]: 2025-10-09 10:57:54.764521173 +0000 UTC m=+0.160154889 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:54 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:55 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 1 completed events
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:57:55 compute-0 irqbalance[842]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  9 10:57:55 compute-0 irqbalance[842]: IRQ 34 affinity is now unmanaged
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 python3[10282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.246535291 +0000 UTC m=+0.064373553 container create 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.208354519 +0000 UTC m=+0.026192781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct  9 10:57:55 compute-0 systemd[1]: Started libpod-conmon-1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba.scope.
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct  9 10:57:55 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639bb93991f0c6fdaa92ff587b08d2dce213db99cb64a1ce4a41525888b39080/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639bb93991f0c6fdaa92ff587b08d2dce213db99cb64a1ce4a41525888b39080/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639bb93991f0c6fdaa92ff587b08d2dce213db99cb64a1ce4a41525888b39080/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:55 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 10:57:55 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:55 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 10:57:55 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.50689692 +0000 UTC m=+0.324735202 container init 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.512587462 +0000 UTC m=+0.330425734 container start 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.532569292 +0000 UTC m=+0.350407574 container attach 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct  9 10:57:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3811625177' entity='client.admin' 
Oct  9 10:57:55 compute-0 systemd[1]: libpod-1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba.scope: Deactivated successfully.
Oct  9 10:57:55 compute-0 podman[10295]: 2025-10-09 10:57:55.867675014 +0000 UTC m=+0.685513276 container died 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:57:55 compute-0 podman[10412]: 2025-10-09 10:57:55.866293851 +0000 UTC m=+0.023927988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:55 compute-0 podman[10412]: 2025-10-09 10:57:55.972583785 +0000 UTC m=+0.130217902 container create 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:57:56 compute-0 systemd[1]: Started libpod-conmon-16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604.scope.
Oct  9 10:57:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-639bb93991f0c6fdaa92ff587b08d2dce213db99cb64a1ce4a41525888b39080-merged.mount: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10295]: 2025-10-09 10:57:56.058529247 +0000 UTC m=+0.876367529 container remove 1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba (image=quay.io/ceph/ceph:v19, name=adoring_herschel, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:56 compute-0 podman[10412]: 2025-10-09 10:57:56.06296993 +0000 UTC m=+0.220604097 container init 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-conmon-1f428add9771174ad9d6894ca4e93de2f75621a0f6e7c1451c77b0628cdb05ba.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10412]: 2025-10-09 10:57:56.070238312 +0000 UTC m=+0.227872429 container start 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 10:57:56 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3811625177' entity='client.admin' 
Oct  9 10:57:56 compute-0 podman[10412]: 2025-10-09 10:57:56.074327504 +0000 UTC m=+0.231961621 container attach 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 10:57:56 compute-0 gallant_kirch[10444]: 167 167
Oct  9 10:57:56 compute-0 podman[10412]: 2025-10-09 10:57:56.075106698 +0000 UTC m=+0.232740815 container died 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3cfa56d2b6b9cba8e94dc595a37396a7a1e1ef8fb11b5d7dbe3d8627830365b-merged.mount: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10412]: 2025-10-09 10:57:56.115159111 +0000 UTC m=+0.272793228 container remove 16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604 (image=quay.io/ceph/ceph:v19, name=gallant_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-conmon-16b2b1d5b641af5e4d507de270f39fb14f40848411260847161b0ee213b60604.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.izrudc (unknown last config time)...
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.izrudc (unknown last config time)...
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 10:57:56 compute-0 ansible-async_wrapper.py[9008]: Done in kid B.
Oct  9 10:57:56 compute-0 python3[10506]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.421170892 +0000 UTC m=+0.035935322 container create 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:56 compute-0 systemd[1]: Started libpod-conmon-1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259.scope.
Oct  9 10:57:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c580b2ea294f97f7015f55be345faa5a5e0ab1445b491662d5d1140e4532f63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c580b2ea294f97f7015f55be345faa5a5e0ab1445b491662d5d1140e4532f63/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c580b2ea294f97f7015f55be345faa5a5e0ab1445b491662d5d1140e4532f63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.483486177 +0000 UTC m=+0.098250647 container init 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.490555314 +0000 UTC m=+0.105319754 container start 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.493970874 +0000 UTC m=+0.108735334 container attach 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.40424982 +0000 UTC m=+0.019014280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.572472358 +0000 UTC m=+0.037624716 container create 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:57:56 compute-0 systemd[1]: Started libpod-conmon-832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e.scope.
Oct  9 10:57:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.623998648 +0000 UTC m=+0.089151026 container init 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.631155777 +0000 UTC m=+0.096308135 container start 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:56 compute-0 beautiful_euler[10605]: 167 167
Oct  9 10:57:56 compute-0 systemd[1]: libpod-832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.636399915 +0000 UTC m=+0.101552273 container attach 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.636769087 +0000 UTC m=+0.101921445 container died 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.556084122 +0000 UTC m=+0.021236500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c29ac19914be0930ebe11c1d656c016b00ada00337a084dc5955f9c9877046-merged.mount: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10570]: 2025-10-09 10:57:56.670671683 +0000 UTC m=+0.135824041 container remove 832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e (image=quay.io/ceph/ceph:v19, name=beautiful_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-conmon-832106e52eaa33f90972c5510d43c11348cc6a4c6b7c04870d454ded80c6b62e.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1147191626' entity='client.admin' 
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.867464945 +0000 UTC m=+0.482229395 container died 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c580b2ea294f97f7015f55be345faa5a5e0ab1445b491662d5d1140e4532f63-merged.mount: Deactivated successfully.
Oct  9 10:57:56 compute-0 podman[10536]: 2025-10-09 10:57:56.934194692 +0000 UTC m=+0.548959132 container remove 1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259 (image=quay.io/ceph/ceph:v19, name=nervous_greider, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:57:56 compute-0 systemd[1]: libpod-conmon-1cd89f4617d5e50ff5ebb35e4231e81fb3bb4ae4de940f89de75f194f71a1259.scope: Deactivated successfully.
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  9 10:57:56 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:57:56 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: Reconfiguring mgr.compute-0.izrudc (unknown last config time)...
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:57:57 compute-0 ceph-mon[4705]: Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1147191626' entity='client.admin' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:57 compute-0 ceph-mon[4705]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  9 10:57:57 compute-0 python3[10711]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:57 compute-0 podman[10712]: 2025-10-09 10:57:57.365035742 +0000 UTC m=+0.039736524 container create dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 10:57:57 compute-0 systemd[1]: Started libpod-conmon-dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec.scope.
Oct  9 10:57:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ced3cd212d7ac900f396c85966dd5806805131cf10810a91c89f4c5292bf2d5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ced3cd212d7ac900f396c85966dd5806805131cf10810a91c89f4c5292bf2d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ced3cd212d7ac900f396c85966dd5806805131cf10810a91c89f4c5292bf2d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:57:57 compute-0 podman[10712]: 2025-10-09 10:57:57.347931794 +0000 UTC m=+0.022632576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:57 compute-0 podman[10712]: 2025-10-09 10:57:57.477849464 +0000 UTC m=+0.152550246 container init dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:57:57 compute-0 podman[10712]: 2025-10-09 10:57:57.485068646 +0000 UTC m=+0.159769428 container start dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 10:57:57 compute-0 podman[10712]: 2025-10-09 10:57:57.507773903 +0000 UTC m=+0.182474675 container attach dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:57:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct  9 10:57:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2202088609' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  9 10:57:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  9 10:57:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:57:58 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2202088609' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  9 10:57:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2202088609' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  9 10:57:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  9 10:57:58 compute-0 busy_goldstine[10727]: set require_min_compat_client to mimic
Oct  9 10:57:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  9 10:57:58 compute-0 systemd[1]: libpod-dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec.scope: Deactivated successfully.
Oct  9 10:57:58 compute-0 podman[10712]: 2025-10-09 10:57:58.304610843 +0000 UTC m=+0.979311625 container died dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ced3cd212d7ac900f396c85966dd5806805131cf10810a91c89f4c5292bf2d5-merged.mount: Deactivated successfully.
Oct  9 10:57:58 compute-0 podman[10712]: 2025-10-09 10:57:58.422466978 +0000 UTC m=+1.097167760 container remove dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec (image=quay.io/ceph/ceph:v19, name=busy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 10:57:58 compute-0 systemd[1]: libpod-conmon-dab15129195653d992baef3264d1b9f7a0267336e2d0c483e66edd3f982bb8ec.scope: Deactivated successfully.
Oct  9 10:57:58 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:57:59 compute-0 python3[10788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:57:59 compute-0 podman[10789]: 2025-10-09 10:57:59.057191446 +0000 UTC m=+0.041030994 container create 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:57:59 compute-0 systemd[1]: Started libpod-conmon-3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d.scope.
Oct  9 10:57:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1aea5806cf55c29a96059699e602f4034b5e989f96ec9a83c555c5e55b12b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1aea5806cf55c29a96059699e602f4034b5e989f96ec9a83c555c5e55b12b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1aea5806cf55c29a96059699e602f4034b5e989f96ec9a83c555c5e55b12b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:57:59 compute-0 podman[10789]: 2025-10-09 10:57:59.122529809 +0000 UTC m=+0.106369387 container init 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:57:59 compute-0 podman[10789]: 2025-10-09 10:57:59.12817137 +0000 UTC m=+0.112010918 container start 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct  9 10:57:59 compute-0 podman[10789]: 2025-10-09 10:57:59.131387883 +0000 UTC m=+0.115227431 container attach 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:57:59 compute-0 podman[10789]: 2025-10-09 10:57:59.039077117 +0000 UTC m=+0.022916675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:57:59 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2202088609' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  9 10:57:59 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:57:59 compute-0 ceph-mgr[4997]: [cephadm INFO root] Added host compute-0
Oct  9 10:57:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:57:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:57:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:58:00 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:00 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:01 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct  9 10:58:01 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct  9 10:58:01 compute-0 ceph-mon[4705]: Added host compute-0
Oct  9 10:58:02 compute-0 ceph-mon[4705]: Deploying cephadm binary to compute-1
Oct  9 10:58:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:02 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:04 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:58:04 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:04 compute-0 ceph-mgr[4997]: [cephadm INFO root] Added host compute-1
Oct  9 10:58:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Added host compute-1
Oct  9 10:58:04 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:05 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:05 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:05 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:05 compute-0 ceph-mon[4705]: Added host compute-1
Oct  9 10:58:05 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:05 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:06 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct  9 10:58:06 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct  9 10:58:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:06 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:06 compute-0 ceph-mon[4705]: Deploying cephadm binary to compute-2
Oct  9 10:58:06 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:08 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 10:58:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Added host compute-2
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Added host compute-2
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:58:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:58:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct  9 10:58:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Added host 'compute-0' with addr '192.168.122.100'
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Added host 'compute-1' with addr '192.168.122.101'
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Added host 'compute-2' with addr '192.168.122.102'
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Scheduled mon update...
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Scheduled mgr update...
Oct  9 10:58:09 compute-0 exciting_hopper[10804]: Scheduled osd.default_drive_group update...
Oct  9 10:58:09 compute-0 systemd[1]: libpod-3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d.scope: Deactivated successfully.
Oct  9 10:58:09 compute-0 podman[10789]: 2025-10-09 10:58:09.918423403 +0000 UTC m=+10.902262961 container died 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7be1aea5806cf55c29a96059699e602f4034b5e989f96ec9a83c555c5e55b12b-merged.mount: Deactivated successfully.
Oct  9 10:58:09 compute-0 podman[10789]: 2025-10-09 10:58:09.959307963 +0000 UTC m=+10.943147511 container remove 3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d (image=quay.io/ceph/ceph:v19, name=exciting_hopper, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:58:09 compute-0 systemd[1]: libpod-conmon-3a8ccb29f2b3f6873b98995fedc233ecbd96de450d0da4a73656a333d324553d.scope: Deactivated successfully.
Oct  9 10:58:10 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:10 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:10 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:10 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:10 compute-0 python3[10962]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.389017025 +0000 UTC m=+0.045230949 container create 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:10 compute-0 systemd[1]: Started libpod-conmon-7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546.scope.
Oct  9 10:58:10 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f71a6a9922d68b59f092b26398572da6a3c25c245935b4b16ba8686eb6342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f71a6a9922d68b59f092b26398572da6a3c25c245935b4b16ba8686eb6342/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f71a6a9922d68b59f092b26398572da6a3c25c245935b4b16ba8686eb6342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.454086279 +0000 UTC m=+0.110300233 container init 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.459335117 +0000 UTC m=+0.115549051 container start 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.366325639 +0000 UTC m=+0.022539603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.462808779 +0000 UTC m=+0.119022713 container attach 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 10:58:10 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 10:58:10 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991034133' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 10:58:10 compute-0 unruffled_yalow[10980]: 
Oct  9 10:58:10 compute-0 unruffled_yalow[10980]: {"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":53,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-09T10:57:16.742582+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-09T10:57:16.745535+0000","services":{}},"progress_events":{}}
Oct  9 10:58:10 compute-0 systemd[1]: libpod-7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546.scope: Deactivated successfully.
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.891995785 +0000 UTC m=+0.548209719 container died 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 10:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-321f71a6a9922d68b59f092b26398572da6a3c25c245935b4b16ba8686eb6342-merged.mount: Deactivated successfully.
Oct  9 10:58:10 compute-0 podman[10964]: 2025-10-09 10:58:10.924551127 +0000 UTC m=+0.580765061 container remove 7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546 (image=quay.io/ceph/ceph:v19, name=unruffled_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:58:10 compute-0 systemd[1]: libpod-conmon-7b9cfa3edd98c2211f97bc249fc868fee4fc6242b875b23b1eb708e4758c5546.scope: Deactivated successfully.
Oct  9 10:58:10 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Added host compute-2
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 10:58:11 compute-0 ceph-mon[4705]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 10:58:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:12 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:14 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:16 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:18 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:20 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:22 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:24 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:58:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:58:26 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:58:26 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:58:26 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:27 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:58:27 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:58:27 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:58:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:27 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:58:27 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:58:28 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:58:28 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:58:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:58:28.816+0000 7f573b1a4640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: service_name: mon
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: placement:
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  hosts:
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-0
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-1
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-2
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:58:28.817+0000 7f573b1a4640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: service_name: mgr
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: placement:
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  hosts:
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-0
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-1
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 93fe02ae-12be-4157-b3c3-0d7e7089e8dd (Updating crash deployment (+1 -> 2))
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  - compute-2
Oct  9 10:58:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 10:58:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:58:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct  9 10:58:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct  9 10:58:29 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:58:29 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:29 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:29 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:29 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:58:29 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:58:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 10:58:30 compute-0 ceph-mon[4705]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 10:58:30 compute-0 ceph-mon[4705]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 10:58:30 compute-0 ceph-mon[4705]: Deploying daemon crash.compute-1 on compute-1
Oct  9 10:58:30 compute-0 ceph-mon[4705]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 10:58:30 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:32 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:33 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 93fe02ae-12be-4157-b3c3-0d7e7089e8dd (Updating crash deployment (+1 -> 2))
Oct  9 10:58:33 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 93fe02ae-12be-4157-b3c3-0d7e7089e8dd (Updating crash deployment (+1 -> 2)) in 5 seconds
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:58:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.808302417 +0000 UTC m=+0.033009408 container create 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:33 compute-0 systemd[1341]: Starting Mark boot as successful...
Oct  9 10:58:33 compute-0 systemd[1]: Started libpod-conmon-2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78.scope.
Oct  9 10:58:33 compute-0 systemd[1341]: Finished Mark boot as successful.
Oct  9 10:58:33 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.863071652 +0000 UTC m=+0.087778643 container init 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.869358492 +0000 UTC m=+0.094065483 container start 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:58:33 compute-0 busy_bouman[11123]: 167 167
Oct  9 10:58:33 compute-0 systemd[1]: libpod-2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78.scope: Deactivated successfully.
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.874362222 +0000 UTC m=+0.099069263 container attach 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.875729227 +0000 UTC m=+0.100436238 container died 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.793282236 +0000 UTC m=+0.017989267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-01cbf89700d35580b67ca8a254f9245112960960ba39ee619e1f687d8d90463d-merged.mount: Deactivated successfully.
Oct  9 10:58:33 compute-0 podman[11104]: 2025-10-09 10:58:33.910745579 +0000 UTC m=+0.135452570 container remove 2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:58:33 compute-0 systemd[1]: libpod-conmon-2b95fa832a87bed887cf01761a473df10afdb96690bc1ea9724018ab515e8c78.scope: Deactivated successfully.
Oct  9 10:58:34 compute-0 podman[11148]: 2025-10-09 10:58:34.088791001 +0000 UTC m=+0.046816390 container create f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 10:58:34 compute-0 systemd[1]: Started libpod-conmon-f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5.scope.
Oct  9 10:58:34 compute-0 podman[11148]: 2025-10-09 10:58:34.068461569 +0000 UTC m=+0.026486978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:34 compute-0 podman[11148]: 2025-10-09 10:58:34.226938285 +0000 UTC m=+0.184963694 container init f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:58:34 compute-0 podman[11148]: 2025-10-09 10:58:34.234996264 +0000 UTC m=+0.193021653 container start f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:34 compute-0 podman[11148]: 2025-10-09 10:58:34.26735649 +0000 UTC m=+0.225381879 container attach f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:58:34 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:58:34 compute-0 exciting_dubinsky[11165]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:58:34 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:34 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:34 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0ea02d81-16d9-4b32-9888-cc7ebc83243e
Oct  9 10:58:34 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:34 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_10:58:34
Oct  9 10:58:34 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:58:34 compute-0 ceph-mgr[4997]: [balancer INFO root] do_upmap
Oct  9 10:58:34 compute-0 ceph-mgr[4997]: [balancer INFO root] No pools available
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 2 completed events
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]': finished
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]': finished
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:35 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 10:58:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]': finished
Oct  9 10:58:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]': finished
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:35 compute-0 lvm[11226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:58:35 compute-0 lvm[11226]: VG ceph_vg0 finished
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2749783588' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  9 10:58:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  9 10:58:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155858687' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: stderr: got monmap epoch 1
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: --> Creating keyring file for osd.0
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  9 10:58:35 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 0ea02d81-16d9-4b32-9888-cc7ebc83243e --setuser ceph --setgroup ceph
Oct  9 10:58:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:58:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:58:36 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  9 10:58:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:37 compute-0 ceph-mon[4705]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: stderr: 2025-10-09T10:58:35.894+0000 7faf6931c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: stderr: 2025-10-09T10:58:36.157+0000 7faf6931c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 10:58:38 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:38 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:39 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:39 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 10:58:39 compute-0 exciting_dubinsky[11165]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:39 compute-0 exciting_dubinsky[11165]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 10:58:39 compute-0 exciting_dubinsky[11165]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  9 10:58:39 compute-0 systemd[1]: libpod-f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5.scope: Deactivated successfully.
Oct  9 10:58:39 compute-0 systemd[1]: libpod-f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5.scope: Consumed 2.021s CPU time.
Oct  9 10:58:39 compute-0 podman[11148]: 2025-10-09 10:58:39.05824332 +0000 UTC m=+5.016268729 container died f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-91158d0c855a53889b7a5ce67c3dea9a110a8f1aa9680260c3b8cb0371e62ffc-merged.mount: Deactivated successfully.
Oct  9 10:58:39 compute-0 podman[11148]: 2025-10-09 10:58:39.167791049 +0000 UTC m=+5.125816458 container remove f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:58:39 compute-0 systemd[1]: libpod-conmon-f6843c67d73caf851675a8b13f2da650b1a9385b901a62b66af8fbe763e575a5.scope: Deactivated successfully.
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.718722173 +0000 UTC m=+0.059859248 container create 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:58:39 compute-0 systemd[1]: Started libpod-conmon-79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18.scope.
Oct  9 10:58:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.677878705 +0000 UTC m=+0.019015790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.800099939 +0000 UTC m=+0.141237044 container init 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.807555398 +0000 UTC m=+0.148692473 container start 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.810702129 +0000 UTC m=+0.151839204 container attach 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:39 compute-0 competent_jones[12259]: 167 167
Oct  9 10:58:39 compute-0 systemd[1]: libpod-79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18.scope: Deactivated successfully.
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.814318245 +0000 UTC m=+0.155455320 container died 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9909e3a6391b941c649aeb5d904821ffe1ba00e466a9d9d7d5baf7ed9d18378-merged.mount: Deactivated successfully.
Oct  9 10:58:39 compute-0 podman[12242]: 2025-10-09 10:58:39.853127728 +0000 UTC m=+0.194264803 container remove 79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:39 compute-0 systemd[1]: libpod-conmon-79d323560edf5f7103dae560c79136ef73b8312050f299e4a8a13ff56a14fd18.scope: Deactivated successfully.
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.0080585 +0000 UTC m=+0.043669700 container create 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Oct  9 10:58:40 compute-0 systemd[1]: Started libpod-conmon-488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814.scope.
Oct  9 10:58:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5aa77c8875f74ed58a36ae7de4508a8042b2c169679badd846f2e6221e3e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5aa77c8875f74ed58a36ae7de4508a8042b2c169679badd846f2e6221e3e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5aa77c8875f74ed58a36ae7de4508a8042b2c169679badd846f2e6221e3e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a5aa77c8875f74ed58a36ae7de4508a8042b2c169679badd846f2e6221e3e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:39.989459984 +0000 UTC m=+0.025071234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.091129381 +0000 UTC m=+0.126740581 container init 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.098011371 +0000 UTC m=+0.133622571 container start 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.105210192 +0000 UTC m=+0.140821412 container attach 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]: {
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:    "0": [
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:        {
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "devices": [
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "/dev/loop3"
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            ],
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "lv_name": "ceph_lv0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "lv_size": "21470642176",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0ea02d81-16d9-4b32-9888-cc7ebc83243e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "lv_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "name": "ceph_lv0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "tags": {
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.block_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.cluster_name": "ceph",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.crush_device_class": "",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.encrypted": "0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.osd_fsid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.osd_id": "0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.type": "block",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.vdo": "0",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:                "ceph.with_tpm": "0"
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            },
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "type": "block",
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:            "vg_name": "ceph_vg0"
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:        }
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]:    ]
Oct  9 10:58:40 compute-0 gallant_wilbur[12298]: }
Oct  9 10:58:40 compute-0 systemd[1]: libpod-488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814.scope: Deactivated successfully.
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.384894899 +0000 UTC m=+0.420506099 container died 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a5aa77c8875f74ed58a36ae7de4508a8042b2c169679badd846f2e6221e3e8-merged.mount: Deactivated successfully.
Oct  9 10:58:40 compute-0 podman[12281]: 2025-10-09 10:58:40.465761339 +0000 UTC m=+0.501372539 container remove 488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 10:58:40 compute-0 systemd[1]: libpod-conmon-488860dfc587a44c31d17539c3f7a43ab43acdd032f6437973dd54f36098e814.scope: Deactivated successfully.
Oct  9 10:58:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  9 10:58:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 10:58:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:40 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:40 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  9 10:58:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  9 10:58:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  9 10:58:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 10:58:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:58:40 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:58:40 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Oct  9 10:58:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Oct  9 10:58:40 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.047730078 +0000 UTC m=+0.058516645 container create ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:41 compute-0 systemd[1]: Started libpod-conmon-ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d.scope.
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.00846143 +0000 UTC m=+0.019248017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.192746672 +0000 UTC m=+0.203533239 container init ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:41 compute-0 python3[12445]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.200499981 +0000 UTC m=+0.211286548 container start ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:58:41 compute-0 heuristic_swartz[12448]: 167 167
Oct  9 10:58:41 compute-0 systemd[1]: libpod-ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d.scope: Deactivated successfully.
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.240338146 +0000 UTC m=+0.251124713 container attach ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.240639626 +0000 UTC m=+0.251426193 container died ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-26bc0045dd3e95acde1c777e18fa3441403c5da4ff46415cd0ba697e4f800133-merged.mount: Deactivated successfully.
Oct  9 10:58:41 compute-0 podman[12408]: 2025-10-09 10:58:41.342049424 +0000 UTC m=+0.352836021 container remove ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:41 compute-0 systemd[1]: libpod-conmon-ab39f9daaa56862d94d900c09bf3b43634a0db742e01c902c4cf3e62d841c14d.scope: Deactivated successfully.
Oct  9 10:58:41 compute-0 podman[12455]: 2025-10-09 10:58:41.267192827 +0000 UTC m=+0.055609112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:58:41 compute-0 podman[12455]: 2025-10-09 10:58:41.373220593 +0000 UTC m=+0.161636898 container create e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:41 compute-0 systemd[1]: Started libpod-conmon-e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580.scope.
Oct  9 10:58:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7c6f34ef32238b2f7feb40df727e656242bf5e5126afb8609066660519b62f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7c6f34ef32238b2f7feb40df727e656242bf5e5126afb8609066660519b62f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7c6f34ef32238b2f7feb40df727e656242bf5e5126afb8609066660519b62f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 podman[12455]: 2025-10-09 10:58:41.437234543 +0000 UTC m=+0.225650838 container init e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:41 compute-0 podman[12455]: 2025-10-09 10:58:41.442483581 +0000 UTC m=+0.230899886 container start e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:41 compute-0 podman[12455]: 2025-10-09 10:58:41.452853973 +0000 UTC m=+0.241270278 container attach e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:41 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 10:58:41 compute-0 ceph-mon[4705]: Deploying daemon osd.0 on compute-0
Oct  9 10:58:41 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 10:58:41 compute-0 ceph-mon[4705]: Deploying daemon osd.1 on compute-1
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.580238923 +0000 UTC m=+0.049505766 container create b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:58:41 compute-0 systemd[1]: Started libpod-conmon-b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c.scope.
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.551087949 +0000 UTC m=+0.020354852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.683561422 +0000 UTC m=+0.152828285 container init b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.69005194 +0000 UTC m=+0.159318783 container start b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.697598692 +0000 UTC m=+0.166865535 container attach b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:58:41 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test[12538]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct  9 10:58:41 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test[12538]:                            [--no-systemd] [--no-tmpfs]
Oct  9 10:58:41 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test[12538]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  9 10:58:41 compute-0 systemd[1]: libpod-b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c.scope: Deactivated successfully.
Oct  9 10:58:41 compute-0 podman[12503]: 2025-10-09 10:58:41.870857231 +0000 UTC m=+0.340124074 container died b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:58:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 10:58:41 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454562265' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 10:58:41 compute-0 amazing_lovelace[12488]: 
Oct  9 10:58:41 compute-0 amazing_lovelace[12488]: {"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":84,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1760007515,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-09T10:57:16:742582+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-09T10:58:38.830680+0000","services":{}},"progress_events":{}}
Oct  9 10:58:41 compute-0 systemd[1]: libpod-e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580.scope: Deactivated successfully.
Oct  9 10:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-928a6e58396c51cc749432c982344e3831df14dabc022fc7234cd6b35fa66bfd-merged.mount: Deactivated successfully.
Oct  9 10:58:42 compute-0 podman[12503]: 2025-10-09 10:58:42.043449829 +0000 UTC m=+0.512716672 container remove b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:42 compute-0 podman[12455]: 2025-10-09 10:58:42.050144193 +0000 UTC m=+0.838560488 container died e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:42 compute-0 systemd[1]: libpod-conmon-b31981e83f91a4bf8b28311a8e71a7d7fd69c030c2c3d051bcd8edf41efd152c.scope: Deactivated successfully.
Oct  9 10:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7c6f34ef32238b2f7feb40df727e656242bf5e5126afb8609066660519b62f-merged.mount: Deactivated successfully.
Oct  9 10:58:42 compute-0 podman[12455]: 2025-10-09 10:58:42.159772544 +0000 UTC m=+0.948188829 container remove e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580 (image=quay.io/ceph/ceph:v19, name=amazing_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:58:42 compute-0 systemd[1]: libpod-conmon-e6c3306afb14ab0f966a2cb885229dd5936c6676868e92c008602f138718d580.scope: Deactivated successfully.
Oct  9 10:58:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:42 compute-0 systemd[1]: Reloading.
Oct  9 10:58:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:58:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:58:42 compute-0 systemd[1]: Reloading.
Oct  9 10:58:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:58:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:58:42 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:42 compute-0 systemd[1]: Starting Ceph osd.0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:58:43 compute-0 podman[12711]: 2025-10-09 10:58:43.224170664 +0000 UTC m=+0.078838416 container create f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:43 compute-0 podman[12711]: 2025-10-09 10:58:43.167314263 +0000 UTC m=+0.021982035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:43 compute-0 podman[12711]: 2025-10-09 10:58:43.352895607 +0000 UTC m=+0.207563379 container init f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:58:43 compute-0 podman[12711]: 2025-10-09 10:58:43.358664972 +0000 UTC m=+0.213332724 container start f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:43 compute-0 podman[12711]: 2025-10-09 10:58:43.400492482 +0000 UTC m=+0.255160234 container attach f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:43 compute-0 bash[12711]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:43 compute-0 bash[12711]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:44 compute-0 lvm[12807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:58:44 compute-0 lvm[12807]: VG ceph_vg0 finished
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:44 compute-0 bash[12711]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:44 compute-0 bash[12711]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 10:58:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate[12726]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 10:58:44 compute-0 bash[12711]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 10:58:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:44 compute-0 systemd[1]: libpod-f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90.scope: Deactivated successfully.
Oct  9 10:58:44 compute-0 systemd[1]: libpod-f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90.scope: Consumed 1.462s CPU time.
Oct  9 10:58:44 compute-0 podman[12711]: 2025-10-09 10:58:44.630452753 +0000 UTC m=+1.485120505 container died f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:44 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:44 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-579ac4bf6aca8c08ae694cb64f0263c1ede33a30816a409d8e24a3be695d6229-merged.mount: Deactivated successfully.
Oct  9 10:58:44 compute-0 podman[12711]: 2025-10-09 10:58:44.680769385 +0000 UTC m=+1.535437137 container remove f19c1f1dcb4879e45878d3bbb5305b23ee13076e1a6556f3eb8847566ef01a90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:58:44 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:44 compute-0 podman[12966]: 2025-10-09 10:58:44.941555767 +0000 UTC m=+0.077782091 container create 0f1cbc8030bd09c2a5ccf23bf5425c59c660c8a1dd475b4fc7604fec09f48704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct  9 10:58:44 compute-0 podman[12966]: 2025-10-09 10:58:44.890294306 +0000 UTC m=+0.026520660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce440ae74c26daf433efa3dfaa49f0d26263a9c025e727ebcb5d50184203fdb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce440ae74c26daf433efa3dfaa49f0d26263a9c025e727ebcb5d50184203fdb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce440ae74c26daf433efa3dfaa49f0d26263a9c025e727ebcb5d50184203fdb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce440ae74c26daf433efa3dfaa49f0d26263a9c025e727ebcb5d50184203fdb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce440ae74c26daf433efa3dfaa49f0d26263a9c025e727ebcb5d50184203fdb5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:45 compute-0 podman[12966]: 2025-10-09 10:58:45.041785108 +0000 UTC m=+0.178011472 container init 0f1cbc8030bd09c2a5ccf23bf5425c59c660c8a1dd475b4fc7604fec09f48704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:45 compute-0 podman[12966]: 2025-10-09 10:58:45.047051297 +0000 UTC m=+0.183277621 container start 0f1cbc8030bd09c2a5ccf23bf5425c59c660c8a1dd475b4fc7604fec09f48704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 10:58:45 compute-0 bash[12966]: 0f1cbc8030bd09c2a5ccf23bf5425c59c660c8a1dd475b4fc7604fec09f48704
Oct  9 10:58:45 compute-0 systemd[1]: Started Ceph osd.0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:58:45 compute-0 ceph-osd[12987]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:58:45 compute-0 ceph-osd[12987]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct  9 10:58:45 compute-0 ceph-osd[12987]: pidfile_write: ignore empty --pid-file
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:58:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:58:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.578629642 +0000 UTC m=+0.019583048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.675816493 +0000 UTC m=+0.116769889 container create 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:58:45 compute-0 systemd[1]: Started libpod-conmon-8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de.scope.
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e463e39800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.747567832 +0000 UTC m=+0.188521228 container init 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.754542786 +0000 UTC m=+0.195496182 container start 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.757262283 +0000 UTC m=+0.198215709 container attach 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:45 compute-0 thirsty_snyder[13117]: 167 167
Oct  9 10:58:45 compute-0 systemd[1]: libpod-8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de.scope: Deactivated successfully.
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.760689733 +0000 UTC m=+0.201643129 container died 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e879531f102d9097254f98ff9dca458a8371e53a9bb78b1bd3b1fa098e919075-merged.mount: Deactivated successfully.
Oct  9 10:58:45 compute-0 podman[13101]: 2025-10-09 10:58:45.796148638 +0000 UTC m=+0.237102034 container remove 8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:58:45 compute-0 systemd[1]: libpod-conmon-8db507101be71c53fd4af83e82386bfbaae004aa47289936c7d8d71670e6c3de.scope: Deactivated successfully.
Oct  9 10:58:45 compute-0 podman[13140]: 2025-10-09 10:58:45.943356292 +0000 UTC m=+0.045006771 container create 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:58:45 compute-0 ceph-osd[12987]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  9 10:58:45 compute-0 systemd[1]: Started libpod-conmon-1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0.scope.
Oct  9 10:58:45 compute-0 ceph-osd[12987]: load: jerasure load: lrc 
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 10:58:45 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83742f2e3fa964be1fbf114f90de760be386f5a37f5b850851a235163c4efb73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83742f2e3fa964be1fbf114f90de760be386f5a37f5b850851a235163c4efb73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83742f2e3fa964be1fbf114f90de760be386f5a37f5b850851a235163c4efb73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83742f2e3fa964be1fbf114f90de760be386f5a37f5b850851a235163c4efb73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:45.92516094 +0000 UTC m=+0.026811349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:46.038283294 +0000 UTC m=+0.139933713 container init 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:46.046206367 +0000 UTC m=+0.147856766 container start 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:46.050490244 +0000 UTC m=+0.152140663 container attach 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:58:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  9 10:58:46 compute-0 ceph-osd[12987]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 lvm[13251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:58:46 compute-0 lvm[13251]: VG ceph_vg0 finished
Oct  9 10:58:46 compute-0 lvm[13253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:58:46 compute-0 lvm[13253]: VG ceph_vg0 finished
Oct  9 10:58:46 compute-0 suspicious_pascal[13161]: {}
Oct  9 10:58:46 compute-0 systemd[1]: libpod-1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0.scope: Deactivated successfully.
Oct  9 10:58:46 compute-0 systemd[1]: libpod-1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0.scope: Consumed 1.082s CPU time.
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:46.713225869 +0000 UTC m=+0.814876298 container died 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 10:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-83742f2e3fa964be1fbf114f90de760be386f5a37f5b850851a235163c4efb73-merged.mount: Deactivated successfully.
Oct  9 10:58:46 compute-0 podman[13140]: 2025-10-09 10:58:46.800308379 +0000 UTC m=+0.901958778 container remove 1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:58:46 compute-0 systemd[1]: libpod-conmon-1a357f32d51102f9998981e9643ca5ff213e19682f89f91ed66f333d52ea0ab0.scope: Deactivated successfully.
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs mount shared_bdev_used = 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 10:58:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: RocksDB version: 7.9.2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Git sha 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: DB SUMMARY
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: DB Session ID:  YKWLOVBT9XXM58QH459E
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: CURRENT file:  CURRENT
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                         Options.error_if_exists: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.create_if_missing: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                                     Options.env: 0x55e464cafea0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                                Options.info_log: 0x55e464cb3800
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                              Options.statistics: (nil)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.use_fsync: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                              Options.db_log_dir: 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.write_buffer_manager: 0x55e464da4a00
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.unordered_write: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.row_cache: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                              Options.wal_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.two_write_queues: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.wal_compression: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.atomic_flush: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.max_background_jobs: 4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.max_background_compactions: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.max_subcompactions: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.max_open_files: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Compression algorithms supported:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kZSTD supported: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kXpressCompression supported: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kZlibCompression supported: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3be0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3be0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3be0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8a251c7a-e64f-435d-afc4-f382461b3543
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007526867257, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007526867419, "job": 1, "event": "recovery_finished"}
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  9 10:58:46 compute-0 ceph-osd[12987]: freelist init
Oct  9 10:58:46 compute-0 ceph-osd[12987]: freelist _read_cfg
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 10:58:46 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bluefs umount
Oct  9 10:58:46 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 10:58:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct  9 10:58:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bdev(0x55e464cdb000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluefs mount shared_bdev_used = 4718592
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: RocksDB version: 7.9.2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Git sha 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: DB SUMMARY
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: DB Session ID:  YKWLOVBT9XXM58QH459F
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: CURRENT file:  CURRENT
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                         Options.error_if_exists: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.create_if_missing: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                                     Options.env: 0x55e464e4a310
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                                Options.info_log: 0x55e464cb3980
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                              Options.statistics: (nil)
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.use_fsync: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                              Options.db_log_dir: 
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.write_buffer_manager: 0x55e464da4a00
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.unordered_write: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.row_cache: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                              Options.wal_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.two_write_queues: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.wal_compression: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.atomic_flush: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.max_background_jobs: 4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.max_background_compactions: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.max_subcompactions: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.max_open_files: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Compression algorithms supported:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kZSTD supported: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kXpressCompression supported: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kBZip2Compression supported: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kLZ4Compression supported: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kZlibCompression supported: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: 	kSnappyCompression supported: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e463ecf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e463ecf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e463ecf350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb36e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ecf350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3b20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3b20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:           Options.merge_operator: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e464cb3b20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e463ece9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.compression: LZ4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.num_levels: 7
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.bloom_locality: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                               Options.ttl: 2592000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                       Options.enable_blob_files: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                           Options.min_blob_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8a251c7a-e64f-435d-afc4-f382461b3543
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007527131310, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007527152983, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007527, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8a251c7a-e64f-435d-afc4-f382461b3543", "db_session_id": "YKWLOVBT9XXM58QH459F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007527167299, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007527, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8a251c7a-e64f-435d-afc4-f382461b3543", "db_session_id": "YKWLOVBT9XXM58QH459F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007527188210, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007527, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8a251c7a-e64f-435d-afc4-f382461b3543", "db_session_id": "YKWLOVBT9XXM58QH459F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007527189917, "job": 1, "event": "recovery_finished"}
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  9 10:58:47 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e464eae000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: DB pointer 0x55e464e58000
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  9 10:58:47 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:58:47 compute-0 ceph-osd[12987]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e463ecf350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e463ecf350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e463ecf350#2 capacity: 460.80 MB usag
Oct  9 10:58:47 compute-0 ceph-osd[12987]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  9 10:58:47 compute-0 ceph-osd[12987]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  9 10:58:47 compute-0 ceph-osd[12987]: _get_class not permitted to load lua
Oct  9 10:58:47 compute-0 ceph-osd[12987]: _get_class not permitted to load sdk
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 load_pgs
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 load_pgs opened 0 pgs
Oct  9 10:58:47 compute-0 ceph-osd[12987]: osd.0 0 log_to_monitors true
Oct  9 10:58:47 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0[12983]: 2025-10-09T10:58:47.343+0000 7f0b50c96740 -1 osd.0 0 log_to_monitors true
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 podman[13821]: 2025-10-09 10:58:47.708600799 +0000 UTC m=+0.097205964 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:58:47 compute-0 podman[13821]: 2025-10-09 10:58:47.797169576 +0000 UTC m=+0.185774711 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct  9 10:58:47 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:47 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:58:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  9 10:58:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 done with init, starting boot process
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 start_boot
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  9 10:58:48 compute-0 ceph-osd[12987]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2648504134; not ready for session (expect reconnect)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:48 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:48 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.246213996 +0000 UTC m=+0.071756380 container create ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.194317063 +0000 UTC m=+0.019859467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:49 compute-0 systemd[1]: Started libpod-conmon-ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209.scope.
Oct  9 10:58:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.458845155 +0000 UTC m=+0.284387549 container init ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.467189223 +0000 UTC m=+0.292731607 container start ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 10:58:49 compute-0 vigilant_taussig[14093]: 167 167
Oct  9 10:58:49 compute-0 systemd[1]: libpod-ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209.scope: Deactivated successfully.
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.494785066 +0000 UTC m=+0.320327470 container attach ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.495154888 +0000 UTC m=+0.320697282 container died ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f12ec68e57d20260923836aa6bff02872254e855c470f0248307c4d02374b5-merged.mount: Deactivated successfully.
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:49 compute-0 podman[14076]: 2025-10-09 10:58:49.770542517 +0000 UTC m=+0.596084901 container remove ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:49 compute-0 systemd[1]: libpod-conmon-ef90759af0fcd9f038262ead2cfbab9242ebab3c292a6a1a1234aeabf7995209.scope: Deactivated successfully.
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2648504134; not ready for session (expect reconnect)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:49 compute-0 podman[14117]: 2025-10-09 10:58:49.895156949 +0000 UTC m=+0.023494214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:58:49 compute-0 podman[14117]: 2025-10-09 10:58:49.99512141 +0000 UTC m=+0.123458635 container create 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct  9 10:58:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Oct  9 10:58:49 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Oct  9 10:58:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:50 compute-0 systemd[1]: Started libpod-conmon-8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71.scope.
Oct  9 10:58:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28db675ffba0f10e502efa023c4877be5eaad64f2e8610ca2a1851cbc6d0d887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28db675ffba0f10e502efa023c4877be5eaad64f2e8610ca2a1851cbc6d0d887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28db675ffba0f10e502efa023c4877be5eaad64f2e8610ca2a1851cbc6d0d887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28db675ffba0f10e502efa023c4877be5eaad64f2e8610ca2a1851cbc6d0d887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:58:50 compute-0 podman[14117]: 2025-10-09 10:58:50.111611712 +0000 UTC m=+0.239948947 container init 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:50 compute-0 podman[14117]: 2025-10-09 10:58:50.120161036 +0000 UTC m=+0.248498261 container start 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:58:50 compute-0 podman[14117]: 2025-10-09 10:58:50.156600173 +0000 UTC m=+0.284937398 container attach 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:58:50 compute-0 recursing_chaum[14133]: [
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:    {
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "available": false,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "being_replaced": false,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "ceph_device_lvm": false,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "lsm_data": {},
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "lvs": [],
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "path": "/dev/sr0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "rejected_reasons": [
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "Insufficient space (<5GB)",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "Has a FileSystem"
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        ],
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        "sys_api": {
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "actuators": null,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "device_nodes": [
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:                "sr0"
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            ],
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "devname": "sr0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "human_readable_size": "482.00 KB",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "id_bus": "ata",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "model": "QEMU DVD-ROM",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "nr_requests": "2",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "parent": "/dev/sr0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "partitions": {},
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "path": "/dev/sr0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "removable": "1",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "rev": "2.5+",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "ro": "0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "rotational": "0",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "sas_address": "",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "sas_device_handle": "",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "scheduler_mode": "mq-deadline",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "sectors": 0,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "sectorsize": "2048",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "size": 493568.0,
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "support_discard": "2048",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "type": "disk",
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:            "vendor": "QEMU"
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:        }
Oct  9 10:58:50 compute-0 recursing_chaum[14133]:    }
Oct  9 10:58:50 compute-0 recursing_chaum[14133]: ]
Oct  9 10:58:50 compute-0 systemd[1]: libpod-8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71.scope: Deactivated successfully.
Oct  9 10:58:50 compute-0 podman[14117]: 2025-10-09 10:58:50.759414069 +0000 UTC m=+0.887751304 container died 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 10:58:50 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-28db675ffba0f10e502efa023c4877be5eaad64f2e8610ca2a1851cbc6d0d887-merged.mount: Deactivated successfully.
Oct  9 10:58:50 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:50 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:50 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2648504134; not ready for session (expect reconnect)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:50 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:50 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:50 compute-0 podman[14117]: 2025-10-09 10:58:50.957647647 +0000 UTC m=+1.085984872 container remove 8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaum, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 10:58:50 compute-0 systemd[1]: libpod-conmon-8ff8892e718e3c343cc0604730fdef5566494195b40cc18fda5141efcf6cba71.scope: Deactivated successfully.
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:51 compute-0 ceph-mon[4705]: Adjusting osd_memory_target on compute-1 to  5247M
Oct  9 10:58:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2648504134; not ready for session (expect reconnect)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:51 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:51 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:52 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:52 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:52 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:58:52 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:58:52 compute-0 ceph-mon[4705]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct  9 10:58:52 compute-0 ceph-mon[4705]: Unable to set osd_memory_target on compute-0 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:58:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:52 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 10:58:52 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:52 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:52 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:52 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2648504134; not ready for session (expect reconnect)
Oct  9 10:58:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:52 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:52 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Oct  9 10:58:53 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134] boot
Oct  9 10:58:53 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:53 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 10:58:53 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 10:58:53 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:53 compute-0 ceph-mon[4705]: OSD bench result of 8841.967619 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.311 iops: 9039.559 elapsed_sec: 0.332
Oct  9 10:58:53 compute-0 ceph-osd[12987]: log_channel(cluster) log [WRN] : OSD bench result of 9039.559272 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 0 waiting for initial osdmap
Oct  9 10:58:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0[12983]: 2025-10-09T10:58:53.677+0000 7f0b4d42c640 -1 osd.0 0 waiting for initial osdmap
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  9 10:58:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-0[12983]: 2025-10-09T10:58:53.700+0000 7f0b48241640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 10:58:53 compute-0 ceph-osd[12987]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct  9 10:58:53 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3267775411; not ready for session (expect reconnect)
Oct  9 10:58:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:53 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:53 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 10:58:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  9 10:58:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:54 compute-0 ceph-mon[4705]: osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134] boot
Oct  9 10:58:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Oct  9 10:58:54 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411] boot
Oct  9 10:58:54 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Oct  9 10:58:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 10:58:54 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 10:58:54 compute-0 ceph-osd[12987]: osd.0 9 state: booting -> active
Oct  9 10:58:54 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  9 10:58:55 compute-0 ceph-mgr[4997]: [devicehealth INFO root] creating mgr pool
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct  9 10:58:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 10:58:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 10:58:55 compute-0 ceph-mon[4705]: OSD bench result of 9039.559272 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 10:58:55 compute-0 ceph-mon[4705]: osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411] boot
Oct  9 10:58:55 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 10:58:55 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Oct  9 10:58:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct  9 10:58:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 10:58:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  9 10:58:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 10:58:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Oct  9 10:58:56 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct  9 10:58:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 10:58:56 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 10:58:56 compute-0 ceph-mgr[4997]: [devicehealth INFO root] creating main.db for devicehealth
Oct  9 10:58:56 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Check health
Oct  9 10:58:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 10:58:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 10:58:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 10:58:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 10:58:56 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  9 10:58:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  9 10:58:57 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.izrudc(active, since 81s)
Oct  9 10:58:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct  9 10:58:57 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 10:58:57 compute-0 ceph-mon[4705]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 10:58:57 compute-0 ceph-mon[4705]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 10:58:57 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct  9 10:58:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:58:58 compute-0 ceph-osd[12987]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  9 10:58:58 compute-0 ceph-osd[12987]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  9 10:58:58 compute-0 ceph-osd[12987]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  9 10:58:58 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  9 10:59:00 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:02 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:04 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:05 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:59:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:59:06 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:06 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:06 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:07 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:07 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:07 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:07 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:07 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:07 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:59:07 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:59:07 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:07 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:59:07 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:59:08 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:08 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev f71542c7-961c-4c47-8c00-bbe7c8223589 (Updating mon deployment (+2 -> 3))
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 10:59:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:08 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct  9 10:59:08 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct  9 10:59:09 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 10:59:09 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:09 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:09 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:09 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 10:59:09 compute-0 ceph-mon[4705]: Deploying daemon mon.compute-2 on compute-2
Oct  9 10:59:09 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 10:59:09 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 10:59:10 compute-0 ceph-mon[4705]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 10:59:10 compute-0 ceph-mon[4705]: Cluster is now healthy
Oct  9 10:59:10 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct  9 10:59:11 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct  9 10:59:11 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:11 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  9 10:59:11 compute-0 ceph-mon[4705]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct  9 10:59:11 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:11 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:12 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:12 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:12 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:12 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:12 compute-0 python3[15218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:12 compute-0 podman[15220]: 2025-10-09 10:59:12.468420824 +0000 UTC m=+0.037451710 container create 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 10:59:12 compute-0 systemd[1]: Started libpod-conmon-200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438.scope.
Oct  9 10:59:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085e08c7a88011a38b3ef9950cc650fa1c40eb6b1a4242a17d687b22ab11ed87/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085e08c7a88011a38b3ef9950cc650fa1c40eb6b1a4242a17d687b22ab11ed87/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085e08c7a88011a38b3ef9950cc650fa1c40eb6b1a4242a17d687b22ab11ed87/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:12 compute-0 podman[15220]: 2025-10-09 10:59:12.544083686 +0000 UTC m=+0.113114602 container init 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:12 compute-0 podman[15220]: 2025-10-09 10:59:12.453155509 +0000 UTC m=+0.022186425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:12 compute-0 podman[15220]: 2025-10-09 10:59:12.551060987 +0000 UTC m=+0.120091883 container start 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 10:59:12 compute-0 podman[15220]: 2025-10-09 10:59:12.55389603 +0000 UTC m=+0.122926956 container attach 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 10:59:12 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:12 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:12 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:13 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:13 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:13 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:13 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:13 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:14 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:14 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:14 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 10:59:14 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:14 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:14 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:14 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 10:59:14 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:15 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:15 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:15 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:15 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:15 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:15 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:15 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:15 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 10:59:15 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:15 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1606452684; not ready for session (expect reconnect)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 10:59:16 compute-0 ceph-mon[4705]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T10:59:11.081153+0000
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : created 2025-10-09T10:57:14.796633+0000
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.izrudc(active, since 99s)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev f71542c7-961c-4c47-8c00-bbe7c8223589 (Updating mon deployment (+2 -> 3))
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event f71542c7-961c-4c47-8c00-bbe7c8223589 (Updating mon deployment (+2 -> 3)) in 7 seconds
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 6639b296-3049-44f6-9ee2-25f84c3258aa (Updating mgr deployment (+2 -> 3))
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.agiurv on compute-2
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.agiurv on compute-2
Oct  9 10:59:16 compute-0 ceph-mon[4705]: Deploying daemon mon.compute-1 on compute-1
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0 calling monitor election
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-2 calling monitor election
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: overall HEALTH_OK
Oct  9 10:59:16 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:16 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  9 10:59:16 compute-0 ceph-mon[4705]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 10:59:16 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 10:59:16 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982754876' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 10:59:16 compute-0 jolly_solomon[15236]: 
Oct  9 10:59:16 compute-0 jolly_solomon[15236]: {"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":11,"quorum":[],"quorum_names":[],"quorum_age":237,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1760007534,"num_in_osds":2,"osd_in_since":1760007515,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475205632,"bytes_avail":42466078720,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-10-09T10:57:16:742582+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-09T10:58:38.830680+0000","services":{}},"progress_events":{"f71542c7-961c-4c47-8c00-bbe7c8223589":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Oct  9 10:59:16 compute-0 systemd[1]: libpod-200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438.scope: Deactivated successfully.
Oct  9 10:59:16 compute-0 podman[15220]: 2025-10-09 10:59:16.574598595 +0000 UTC m=+4.143629491 container died 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-085e08c7a88011a38b3ef9950cc650fa1c40eb6b1a4242a17d687b22ab11ed87-merged.mount: Deactivated successfully.
Oct  9 10:59:16 compute-0 podman[15220]: 2025-10-09 10:59:16.625508239 +0000 UTC m=+4.194539175 container remove 200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438 (image=quay.io/ceph/ceph:v19, name=jolly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:16 compute-0 systemd[1]: libpod-conmon-200aacf90d4656f7b21fedbbbfb7d7ea5435964499229c62d22e177f5bd02438.scope: Deactivated successfully.
Oct  9 10:59:16 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:17 compute-0 python3[15299]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:17 compute-0 ceph-mgr[4997]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct  9 10:59:17 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:17.084+0000 7f57499c1640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct  9 10:59:17 compute-0 podman[15300]: 2025-10-09 10:59:17.113664223 +0000 UTC m=+0.038129802 container create d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:17 compute-0 systemd[1]: Started libpod-conmon-d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537.scope.
Oct  9 10:59:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02883f138f14fe4ed34e71e98e8a81e28f39246864633d86a6cfa6b39ad3e9f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02883f138f14fe4ed34e71e98e8a81e28f39246864633d86a6cfa6b39ad3e9f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:17 compute-0 podman[15300]: 2025-10-09 10:59:17.179059516 +0000 UTC m=+0.103525125 container init d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 10:59:17 compute-0 podman[15300]: 2025-10-09 10:59:17.185416576 +0000 UTC m=+0.109882155 container start d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:17 compute-0 podman[15300]: 2025-10-09 10:59:17.188688234 +0000 UTC m=+0.113153833 container attach d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:59:17 compute-0 podman[15300]: 2025-10-09 10:59:17.097774217 +0000 UTC m=+0.022239816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:17 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:17 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:17 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:17 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:18 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:18 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:18 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:18 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:18 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:18 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:18 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:19 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:19 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:19 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:19 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:19 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:20 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 3 completed events
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:20 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:20 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:20 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:20 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:20 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 10:59:21 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:21 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 10:59:21 compute-0 ceph-mon[4705]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T10:59:16.540045+0000
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : created 2025-10-09T10:57:14.796633+0000
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.izrudc(active, since 105s)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:59:21 compute-0 ceph-mon[4705]: Deploying daemon mgr.compute-2.agiurv on compute-2
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0 calling monitor election
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-2 calling monitor election
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-1 calling monitor election
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: overall HEALTH_OK
Oct  9 10:59:21 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:21 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.rtiqvm on compute-1
Oct  9 10:59:21 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.rtiqvm on compute-1
Oct  9 10:59:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:22 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1378831353; not ready for session (expect reconnect)
Oct  9 10:59:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 10:59:22 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 10:59:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  9 10:59:22 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:22 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 10:59:22 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 10:59:22 compute-0 ceph-mon[4705]: Deploying daemon mgr.compute-1.rtiqvm on compute-1
Oct  9 10:59:22 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:22 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct  9 10:59:22 compute-0 determined_northcutt[15316]: pool 'vms' created
Oct  9 10:59:22 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct  9 10:59:22 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:22 compute-0 systemd[1]: libpod-d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537.scope: Deactivated successfully.
Oct  9 10:59:22 compute-0 podman[15300]: 2025-10-09 10:59:22.737499577 +0000 UTC m=+5.661965156 container died d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-02883f138f14fe4ed34e71e98e8a81e28f39246864633d86a6cfa6b39ad3e9f7-merged.mount: Deactivated successfully.
Oct  9 10:59:22 compute-0 podman[15300]: 2025-10-09 10:59:22.775240284 +0000 UTC m=+5.699705863 container remove d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537 (image=quay.io/ceph/ceph:v19, name=determined_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 10:59:22 compute-0 systemd[1]: libpod-conmon-d7baece8a20734fb8c390622ba54f6b46383bdc8d4320e789b6159354e77d537.scope: Deactivated successfully.
Oct  9 10:59:23 compute-0 python3[15381]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.106035695 +0000 UTC m=+0.048177495 container create ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:23 compute-0 systemd[1]: Started libpod-conmon-ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d.scope.
Oct  9 10:59:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.083951704 +0000 UTC m=+0.026093504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33d3d98f342a43917d8aae0a0876a9b512545af64797c10f70c75ac6af96c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a33d3d98f342a43917d8aae0a0876a9b512545af64797c10f70c75ac6af96c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.19629498 +0000 UTC m=+0.138436770 container init ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.20328345 +0000 UTC m=+0.145425230 container start ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.2211058 +0000 UTC m=+0.163247580 container attach ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 6639b296-3049-44f6-9ee2-25f84c3258aa (Updating mgr deployment (+2 -> 3))
Oct  9 10:59:23 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 6639b296-3049-44f6-9ee2-25f84c3258aa (Updating mgr deployment (+2 -> 3)) in 7 seconds
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 1838a5b1-d688-47b0-84eb-261e1b73eef9 (Updating crash deployment (+1 -> 3))
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:23 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct  9 10:59:23 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct  9 10:59:23 compute-0 youthful_johnson[15397]: pool 'volumes' created
Oct  9 10:59:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct  9 10:59:23 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:23 compute-0 systemd[1]: libpod-ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d.scope: Deactivated successfully.
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.741862553 +0000 UTC m=+0.684004333 container died ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:23 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 10:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a33d3d98f342a43917d8aae0a0876a9b512545af64797c10f70c75ac6af96c4-merged.mount: Deactivated successfully.
Oct  9 10:59:23 compute-0 podman[15382]: 2025-10-09 10:59:23.780313654 +0000 UTC m=+0.722455434 container remove ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d (image=quay.io/ceph/ceph:v19, name=youthful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:59:23 compute-0 systemd[1]: libpod-conmon-ac5f6b79fbb512285d9d6bbc4a6d6012ef0e65ae2cf90c35b517f1ca64cf747d.scope: Deactivated successfully.
Oct  9 10:59:24 compute-0 python3[15462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.114816778 +0000 UTC m=+0.059166998 container create 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:24 compute-0 systemd[1]: Started libpod-conmon-480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823.scope.
Oct  9 10:59:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.079026854 +0000 UTC m=+0.023377104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cb603c2e7afd3534b6a383995913f7d072cc42c2dfdb232abb793ecb45ef8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cb603c2e7afd3534b6a383995913f7d072cc42c2dfdb232abb793ecb45ef8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.191304626 +0000 UTC m=+0.135654846 container init 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.19746274 +0000 UTC m=+0.141812960 container start 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.20076667 +0000 UTC m=+0.145116890 container attach 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Oct  9 10:59:24 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:24 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:24 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  9 10:59:24 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v60: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:24 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:24 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct  9 10:59:24 compute-0 boring_napier[15478]: pool 'backups' created
Oct  9 10:59:24 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct  9 10:59:24 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:24 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:24 compute-0 ceph-mon[4705]: Deploying daemon crash.compute-2 on compute-2
Oct  9 10:59:24 compute-0 ceph-mon[4705]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:24 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:24 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:24 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:24 compute-0 systemd[1]: libpod-480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823.scope: Deactivated successfully.
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.75168555 +0000 UTC m=+0.696035770 container died 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-45cb603c2e7afd3534b6a383995913f7d072cc42c2dfdb232abb793ecb45ef8d-merged.mount: Deactivated successfully.
Oct  9 10:59:24 compute-0 podman[15463]: 2025-10-09 10:59:24.799094628 +0000 UTC m=+0.743444848 container remove 480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823 (image=quay.io/ceph/ceph:v19, name=boring_napier, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 10:59:24 compute-0 systemd[1]: libpod-conmon-480d5dba39385e1aba2a25c8903c05d08955ca459e035be195cbbebcd65bc823.scope: Deactivated successfully.
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:25 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 1838a5b1-d688-47b0-84eb-261e1b73eef9 (Updating crash deployment (+1 -> 3))
Oct  9 10:59:25 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 1838a5b1-d688-47b0-84eb-261e1b73eef9 (Updating crash deployment (+1 -> 3)) in 1 seconds
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:25 compute-0 python3[15543]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.126614069 +0000 UTC m=+0.043104126 container create b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:59:25 compute-0 systemd[1]: Started libpod-conmon-b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4.scope.
Oct  9 10:59:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bfd904fad5d1f387456ef88f245ff85fde1bcb1d53d400831dfd80140819ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bfd904fad5d1f387456ef88f245ff85fde1bcb1d53d400831dfd80140819ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.106559397 +0000 UTC m=+0.023049474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.208235069 +0000 UTC m=+0.124725116 container init b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.215018003 +0000 UTC m=+0.131508050 container start b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.218945933 +0000 UTC m=+0.135435980 container attach b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.599800129 +0000 UTC m=+0.041931908 container create 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:25 compute-0 systemd[1]: Started libpod-conmon-33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72.scope.
Oct  9 10:59:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.665603805 +0000 UTC m=+0.107735604 container init 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.671696877 +0000 UTC m=+0.113828656 container start 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:25 compute-0 agitated_proskuriakova[15692]: 167 167
Oct  9 10:59:25 compute-0 systemd[1]: libpod-33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72.scope: Deactivated successfully.
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.583224871 +0000 UTC m=+0.025356650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.679995321 +0000 UTC m=+0.122127110 container attach 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.680352853 +0000 UTC m=+0.122484632 container died 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-65ed63108650f66ec891b5cd0ee74959e3cdc749f27fb4da1a2be67159eb0750-merged.mount: Deactivated successfully.
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  9 10:59:25 compute-0 podman[15672]: 2025-10-09 10:59:25.78791646 +0000 UTC m=+0.230048259 container remove 33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:25 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct  9 10:59:25 compute-0 modest_ganguly[15608]: pool 'images' created
Oct  9 10:59:25 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct  9 10:59:25 compute-0 systemd[1]: libpod-conmon-33647f8eb94e053f5989fc65a80e44433430f5d6359e4279052bc903875a1e72.scope: Deactivated successfully.
Oct  9 10:59:25 compute-0 systemd[1]: libpod-b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4.scope: Deactivated successfully.
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.811459809 +0000 UTC m=+0.727949856 container died b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:59:25 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:25 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-24bfd904fad5d1f387456ef88f245ff85fde1bcb1d53d400831dfd80140819ec-merged.mount: Deactivated successfully.
Oct  9 10:59:25 compute-0 podman[15548]: 2025-10-09 10:59:25.867958308 +0000 UTC m=+0.784448355 container remove b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4 (image=quay.io/ceph/ceph:v19, name=modest_ganguly, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:25 compute-0 systemd[1]: libpod-conmon-b954b3f4c0e12355ae654de8d0a0da327cc9a3f28ee643c7e02deea0c5d864d4.scope: Deactivated successfully.
Oct  9 10:59:25 compute-0 podman[15727]: 2025-10-09 10:59:25.932376059 +0000 UTC m=+0.039360573 container create e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:25 compute-0 systemd[1]: Started libpod-conmon-e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459.scope.
Oct  9 10:59:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:25.916158661 +0000 UTC m=+0.023143205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:26.022134226 +0000 UTC m=+0.129118760 container init e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:26.03344297 +0000 UTC m=+0.140427484 container start e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:26.037249526 +0000 UTC m=+0.144234040 container attach e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:59:26 compute-0 python3[15772]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.180041329 +0000 UTC m=+0.040171610 container create 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 10:59:26 compute-0 systemd[1]: Started libpod-conmon-1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207.scope.
Oct  9 10:59:26 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d31ba7734c5e56b61be1a78d906393a53ea775cb7b40d7a8ea83d8bc7cf4f8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d31ba7734c5e56b61be1a78d906393a53ea775cb7b40d7a8ea83d8bc7cf4f8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.2333002 +0000 UTC m=+0.093430501 container init 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.239166104 +0000 UTC m=+0.099296385 container start 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.242237036 +0000 UTC m=+0.102367317 container attach 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.162010123 +0000 UTC m=+0.022140434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:26 compute-0 loving_meitner[15767]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:59:26 compute-0 loving_meitner[15767]: --> All data devices are unavailable
Oct  9 10:59:26 compute-0 systemd[1]: libpod-e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459.scope: Deactivated successfully.
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:26.372408431 +0000 UTC m=+0.479392955 container died e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b017d57cbaf18d3a16f1dfe5270578f5699ccb4b63a4e73a634bc30d5d34c56-merged.mount: Deactivated successfully.
Oct  9 10:59:26 compute-0 podman[15727]: 2025-10-09 10:59:26.415208736 +0000 UTC m=+0.522193260 container remove e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_meitner, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:26 compute-0 systemd[1]: libpod-conmon-e87ad46794c924eb9ab7da8142cc957ebe29ae7a7fe6bac9f4aa6102f16ff459.scope: Deactivated successfully.
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"} v 0)
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  9 10:59:26 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 5 completed events
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]': finished
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Oct  9 10:59:26 compute-0 happy_shannon[15792]: pool 'cephfs.cephfs.meta' created
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Oct  9 10:59:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:26 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:26 compute-0 systemd[1]: libpod-1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207.scope: Deactivated successfully.
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.631878343 +0000 UTC m=+0.492008624 container died 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d31ba7734c5e56b61be1a78d906393a53ea775cb7b40d7a8ea83d8bc7cf4f8a-merged.mount: Deactivated successfully.
Oct  9 10:59:26 compute-0 podman[15775]: 2025-10-09 10:59:26.663681894 +0000 UTC m=+0.523812175 container remove 1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207 (image=quay.io/ceph/ceph:v19, name=happy_shannon, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:59:26 compute-0 systemd[1]: libpod-conmon-1d81a124f92e9cada61fc5c4d566091e17b40ab5f6934a45330fdbb1436ed207.scope: Deactivated successfully.
Oct  9 10:59:26 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v64: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.917565861 +0000 UTC m=+0.045656861 container create f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:59:26 compute-0 python3[15953]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:26 compute-0 systemd[1]: Started libpod-conmon-f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d.scope.
Oct  9 10:59:26 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.981452214 +0000 UTC m=+0.109543234 container init f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:59:26 compute-0 podman[15984]: 2025-10-09 10:59:26.985116614 +0000 UTC m=+0.039817057 container create a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.987489333 +0000 UTC m=+0.115580333 container start f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:26 compute-0 upbeat_morse[15990]: 167 167
Oct  9 10:59:26 compute-0 systemd[1]: libpod-f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d.scope: Deactivated successfully.
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.992495309 +0000 UTC m=+0.120586329 container attach f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.99283242 +0000 UTC m=+0.120923420 container died f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 10:59:26 compute-0 podman[15968]: 2025-10-09 10:59:26.897388813 +0000 UTC m=+0.025479833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2d3dd6aa203f042ecdbb2a5215bca6c9309df1a04451a3a02749d96d8d1777-merged.mount: Deactivated successfully.
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/1480029380' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]': finished
Oct  9 10:59:27 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:27 compute-0 podman[15968]: 2025-10-09 10:59:27.040081843 +0000 UTC m=+0.168172843 container remove f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:27 compute-0 systemd[1]: libpod-conmon-f8c50987d483bf76b86985a3e30c0e4cbf5907b1cf4136243e73228b4230a11d.scope: Deactivated successfully.
Oct  9 10:59:27 compute-0 systemd[1]: Started libpod-conmon-a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0.scope.
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:26.968132743 +0000 UTC m=+0.022833216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa51ebf342c88c8a9cd606f4c418eafef3f6aa325a29694cb5111790c5cb5f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa51ebf342c88c8a9cd606f4c418eafef3f6aa325a29694cb5111790c5cb5f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:27.087440129 +0000 UTC m=+0.142140592 container init a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:27.098748673 +0000 UTC m=+0.153449106 container start a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:27.107250914 +0000 UTC m=+0.161951387 container attach a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.190726995 +0000 UTC m=+0.045916730 container create d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:59:27 compute-0 systemd[1]: Started libpod-conmon-d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e.scope.
Oct  9 10:59:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2518ac97010501a1812633410ce6996aaf1db3def8b996858e21abb7ac0d9f71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2518ac97010501a1812633410ce6996aaf1db3def8b996858e21abb7ac0d9f71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2518ac97010501a1812633410ce6996aaf1db3def8b996858e21abb7ac0d9f71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2518ac97010501a1812633410ce6996aaf1db3def8b996858e21abb7ac0d9f71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.168036974 +0000 UTC m=+0.023226749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.278424465 +0000 UTC m=+0.133614230 container init d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.284804256 +0000 UTC m=+0.139994001 container start d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.293160142 +0000 UTC m=+0.148349917 container attach d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 10:59:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 10:59:27 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]: {
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:    "0": [
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:        {
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "devices": [
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "/dev/loop3"
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            ],
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "lv_name": "ceph_lv0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "lv_size": "21470642176",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0ea02d81-16d9-4b32-9888-cc7ebc83243e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "lv_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "name": "ceph_lv0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "tags": {
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.block_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.cluster_name": "ceph",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.crush_device_class": "",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.encrypted": "0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.osd_fsid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.osd_id": "0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.type": "block",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.vdo": "0",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:                "ceph.with_tpm": "0"
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            },
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "type": "block",
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:            "vg_name": "ceph_vg0"
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:        }
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]:    ]
Oct  9 10:59:27 compute-0 lucid_agnesi[16060]: }
Oct  9 10:59:27 compute-0 systemd[1]: libpod-d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e.scope: Deactivated successfully.
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.596601058 +0000 UTC m=+0.451790803 container died d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:59:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  9 10:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2518ac97010501a1812633410ce6996aaf1db3def8b996858e21abb7ac0d9f71-merged.mount: Deactivated successfully.
Oct  9 10:59:27 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Oct  9 10:59:27 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Oct  9 10:59:27 compute-0 vigorous_lamport[16016]: pool 'cephfs.cephfs.data' created
Oct  9 10:59:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:27 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:27 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:27 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:27 compute-0 podman[16025]: 2025-10-09 10:59:27.660268243 +0000 UTC m=+0.515457988 container remove d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:27 compute-0 systemd[1]: libpod-a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0.scope: Deactivated successfully.
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:27.671120472 +0000 UTC m=+0.725820915 container died a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:27 compute-0 systemd[1]: libpod-conmon-d34764e2fdd9fe37bfda3afe880bf49a5d8fb1c5ec5ee756251e4cfdf900c82e.scope: Deactivated successfully.
Oct  9 10:59:27 compute-0 podman[15984]: 2025-10-09 10:59:27.741363595 +0000 UTC m=+0.796064038 container remove a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0 (image=quay.io/ceph/ceph:v19, name=vigorous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:27 compute-0 systemd[1]: libpod-conmon-a538cdbf8a34461816ea72877cf5335029284ca7b5805fdc3cd00da177183ec0.scope: Deactivated successfully.
Oct  9 10:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa51ebf342c88c8a9cd606f4c418eafef3f6aa325a29694cb5111790c5cb5f6-merged.mount: Deactivated successfully.
Oct  9 10:59:28 compute-0 python3[16170]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:28 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1791307082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:28 compute-0 ceph-mon[4705]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 10:59:28 compute-0 ceph-mon[4705]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 10:59:28 compute-0 podman[16189]: 2025-10-09 10:59:28.097332458 +0000 UTC m=+0.023102305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:28 compute-0 podman[16189]: 2025-10-09 10:59:28.239630004 +0000 UTC m=+0.165399831 container create 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 10:59:28 compute-0 systemd[1]: Started libpod-conmon-0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb.scope.
Oct  9 10:59:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b9899ad6bd19423934f03deba0c27ce150054462bd2bba47417bf4af360a22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b9899ad6bd19423934f03deba0c27ce150054462bd2bba47417bf4af360a22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv started
Oct  9 10:59:28 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mgr.compute-2.agiurv 192.168.122.102:0/3037043035; not ready for session (expect reconnect)
Oct  9 10:59:28 compute-0 podman[16189]: 2025-10-09 10:59:28.358281548 +0000 UTC m=+0.284051385 container init 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 10:59:28 compute-0 podman[16189]: 2025-10-09 10:59:28.364595567 +0000 UTC m=+0.290365394 container start 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:28 compute-0 podman[16189]: 2025-10-09 10:59:28.370585686 +0000 UTC m=+0.296355523 container attach 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.434728227 +0000 UTC m=+0.039264120 container create fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 10:59:28 compute-0 systemd[1]: Started libpod-conmon-fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6.scope.
Oct  9 10:59:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.508368492 +0000 UTC m=+0.112904395 container init fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.414195308 +0000 UTC m=+0.018731221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.5152513 +0000 UTC m=+0.119787203 container start fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:28 compute-0 wonderful_joliot[16254]: 167 167
Oct  9 10:59:28 compute-0 systemd[1]: libpod-fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6.scope: Deactivated successfully.
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.52009041 +0000 UTC m=+0.124626303 container attach fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.522677735 +0000 UTC m=+0.127213628 container died fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf3e064f1145a89b0921a0d26631bdb41bc5760ab263c553bc45b41ea9ed9f1-merged.mount: Deactivated successfully.
Oct  9 10:59:28 compute-0 podman[16227]: 2025-10-09 10:59:28.560210487 +0000 UTC m=+0.164746380 container remove fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:28 compute-0 systemd[1]: libpod-conmon-fd102dabfdf37b4250d68cb0114b78a6c388dcbe766d8fc4970a7c35613eb8e6.scope: Deactivated successfully.
Oct  9 10:59:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  9 10:59:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Oct  9 10:59:28 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Oct  9 10:59:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:28 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:28 compute-0 podman[16286]: 2025-10-09 10:59:28.724150718 +0000 UTC m=+0.038298277 container create 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:59:28 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct  9 10:59:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 10:59:28 compute-0 systemd[1]: Started libpod-conmon-97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110.scope.
Oct  9 10:59:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80abd1f67b4afdcf6300cb23385231e9ed734e65f404f36dda751821ab3464ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80abd1f67b4afdcf6300cb23385231e9ed734e65f404f36dda751821ab3464ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80abd1f67b4afdcf6300cb23385231e9ed734e65f404f36dda751821ab3464ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80abd1f67b4afdcf6300cb23385231e9ed734e65f404f36dda751821ab3464ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:28 compute-0 podman[16286]: 2025-10-09 10:59:28.707500868 +0000 UTC m=+0.021648447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:28 compute-0 podman[16286]: 2025-10-09 10:59:28.804454215 +0000 UTC m=+0.118601774 container init 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:28 compute-0 podman[16286]: 2025-10-09 10:59:28.810619858 +0000 UTC m=+0.124767417 container start 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:59:28 compute-0 podman[16286]: 2025-10-09 10:59:28.81369557 +0000 UTC m=+0.127843149 container attach 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.izrudc(active, since 112s), standbys: compute-2.agiurv
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"} v 0)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct  9 10:59:29 compute-0 lvm[16378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:59:29 compute-0 lvm[16378]: VG ceph_vg0 finished
Oct  9 10:59:29 compute-0 trusting_davinci[16304]: {}
Oct  9 10:59:29 compute-0 systemd[1]: libpod-97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110.scope: Deactivated successfully.
Oct  9 10:59:29 compute-0 podman[16286]: 2025-10-09 10:59:29.516819255 +0000 UTC m=+0.830966814 container died 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:59:29 compute-0 systemd[1]: libpod-97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110.scope: Consumed 1.100s CPU time.
Oct  9 10:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-80abd1f67b4afdcf6300cb23385231e9ed734e65f404f36dda751821ab3464ba-merged.mount: Deactivated successfully.
Oct  9 10:59:29 compute-0 podman[16286]: 2025-10-09 10:59:29.55719343 +0000 UTC m=+0.871340989 container remove 97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:29 compute-0 systemd[1]: libpod-conmon-97c9fc0869686db0085cb46576c651b73e705c3fb0b2358f2bf54ff0d4af5110.scope: Deactivated successfully.
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct  9 10:59:29 compute-0 practical_neumann[16223]: enabled application 'rbd' on pool 'vms'
Oct  9 10:59:29 compute-0 systemd[1]: libpod-0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb.scope: Deactivated successfully.
Oct  9 10:59:29 compute-0 podman[16189]: 2025-10-09 10:59:29.746227792 +0000 UTC m=+1.671997619 container died 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct  9 10:59:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:29 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b9899ad6bd19423934f03deba0c27ce150054462bd2bba47417bf4af360a22-merged.mount: Deactivated successfully.
Oct  9 10:59:30 compute-0 podman[16189]: 2025-10-09 10:59:30.032833131 +0000 UTC m=+1.958602958 container remove 0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb (image=quay.io/ceph/ceph:v19, name=practical_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:59:30 compute-0 systemd[1]: libpod-conmon-0022e08f10a2d0eef2e5fbd7a292b674d5e83d97f6be4f5f849e76aad713e0bb.scope: Deactivated successfully.
Oct  9 10:59:30 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:30 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:30 compute-0 ceph-mon[4705]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:30 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 10:59:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm started
Oct  9 10:59:30 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from mgr.compute-1.rtiqvm 192.168.122.101:0/983996971; not ready for session (expect reconnect)
Oct  9 10:59:30 compute-0 python3[16433]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:30 compute-0 podman[16434]: 2025-10-09 10:59:30.390977515 +0000 UTC m=+0.042628071 container create 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:59:30 compute-0 systemd[1]: Started libpod-conmon-38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1.scope.
Oct  9 10:59:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a277c29509fca3544906a9d2ff8344c22db34c0a955304b89dcc79dc2a7ab1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a277c29509fca3544906a9d2ff8344c22db34c0a955304b89dcc79dc2a7ab1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:30 compute-0 podman[16434]: 2025-10-09 10:59:30.371267893 +0000 UTC m=+0.022918469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:30 compute-0 podman[16434]: 2025-10-09 10:59:30.468278392 +0000 UTC m=+0.119928958 container init 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:59:30 compute-0 podman[16434]: 2025-10-09 10:59:30.474669423 +0000 UTC m=+0.126319979 container start 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:59:30 compute-0 podman[16434]: 2025-10-09 10:59:30.483395922 +0000 UTC m=+0.135046508 container attach 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 10:59:30 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct  9 10:59:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 10:59:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  9 10:59:31 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 10:59:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 10:59:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct  9 10:59:31 compute-0 nifty_roentgen[16449]: enabled application 'rbd' on pool 'volumes'
Oct  9 10:59:31 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct  9 10:59:31 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.izrudc(active, since 115s), standbys: compute-2.agiurv, compute-1.rtiqvm
Oct  9 10:59:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:31 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"} v 0)
Oct  9 10:59:31 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:31 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct  9 10:59:31 compute-0 systemd[1]: libpod-38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1.scope: Deactivated successfully.
Oct  9 10:59:31 compute-0 podman[16434]: 2025-10-09 10:59:31.224308915 +0000 UTC m=+0.875959461 container died 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a277c29509fca3544906a9d2ff8344c22db34c0a955304b89dcc79dc2a7ab1b-merged.mount: Deactivated successfully.
Oct  9 10:59:31 compute-0 podman[16434]: 2025-10-09 10:59:31.259072446 +0000 UTC m=+0.910723012 container remove 38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1 (image=quay.io/ceph/ceph:v19, name=nifty_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:31 compute-0 systemd[1]: libpod-conmon-38cd192bc35193b26256321305bf88982e2b466a8c9b286c539ddda8af3169f1.scope: Deactivated successfully.
Oct  9 10:59:31 compute-0 python3[16511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:31 compute-0 podman[16512]: 2025-10-09 10:59:31.614766199 +0000 UTC m=+0.051149343 container create 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:59:31 compute-0 systemd[1]: Started libpod-conmon-8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c.scope.
Oct  9 10:59:31 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eca96399f19bc3d0adc39518d33b2347c55d487de042cd87fb9afa3ff2b7d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13eca96399f19bc3d0adc39518d33b2347c55d487de042cd87fb9afa3ff2b7d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:31 compute-0 podman[16512]: 2025-10-09 10:59:31.595759471 +0000 UTC m=+0.032142635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:31 compute-0 podman[16512]: 2025-10-09 10:59:31.693440561 +0000 UTC m=+0.129823735 container init 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:31 compute-0 podman[16512]: 2025-10-09 10:59:31.700156603 +0000 UTC m=+0.136539747 container start 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:31 compute-0 podman[16512]: 2025-10-09 10:59:31.703903517 +0000 UTC m=+0.140286671 container attach 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct  9 10:59:32 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  9 10:59:32 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 10:59:32 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct  9 10:59:32 compute-0 quizzical_lamarr[16527]: enabled application 'rbd' on pool 'backups'
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:32 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:32 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:32 compute-0 systemd[1]: libpod-8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c.scope: Deactivated successfully.
Oct  9 10:59:32 compute-0 podman[16512]: 2025-10-09 10:59:32.262417398 +0000 UTC m=+0.698800552 container died 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 10:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-13eca96399f19bc3d0adc39518d33b2347c55d487de042cd87fb9afa3ff2b7d3-merged.mount: Deactivated successfully.
Oct  9 10:59:32 compute-0 podman[16512]: 2025-10-09 10:59:32.301244223 +0000 UTC m=+0.737627377 container remove 8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c (image=quay.io/ceph/ceph:v19, name=quizzical_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:59:32 compute-0 systemd[1]: libpod-conmon-8d69ea6194cfab6d82e79452e47cab68a46cd62aa594f1a9ba4ce9721bb1821c.scope: Deactivated successfully.
Oct  9 10:59:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:32 compute-0 python3[16589]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:32 compute-0 podman[16590]: 2025-10-09 10:59:32.630112459 +0000 UTC m=+0.036961844 container create 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 10:59:32 compute-0 systemd[1]: Started libpod-conmon-07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0.scope.
Oct  9 10:59:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4d2148cfa7e928cca151ca7b77fc7fa1ce04893df11aa1f6c1f9088d893ac5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4d2148cfa7e928cca151ca7b77fc7fa1ce04893df11aa1f6c1f9088d893ac5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:32 compute-0 podman[16590]: 2025-10-09 10:59:32.68882467 +0000 UTC m=+0.095674085 container init 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:59:32 compute-0 podman[16590]: 2025-10-09 10:59:32.693490584 +0000 UTC m=+0.100339979 container start 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 10:59:32 compute-0 podman[16590]: 2025-10-09 10:59:32.697137885 +0000 UTC m=+0.103987310 container attach 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:59:32 compute-0 podman[16590]: 2025-10-09 10:59:32.613840861 +0000 UTC m=+0.020690286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:32 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct  9 10:59:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 10:59:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  9 10:59:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 10:59:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct  9 10:59:33 compute-0 competent_chebyshev[16607]: enabled application 'rbd' on pool 'images'
Oct  9 10:59:33 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct  9 10:59:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:33 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:33 compute-0 ceph-mon[4705]: Deploying daemon osd.2 on compute-2
Oct  9 10:59:33 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 10:59:33 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 10:59:33 compute-0 systemd[1]: libpod-07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0.scope: Deactivated successfully.
Oct  9 10:59:33 compute-0 podman[16590]: 2025-10-09 10:59:33.276692602 +0000 UTC m=+0.683541997 container died 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4d2148cfa7e928cca151ca7b77fc7fa1ce04893df11aa1f6c1f9088d893ac5-merged.mount: Deactivated successfully.
Oct  9 10:59:33 compute-0 podman[16590]: 2025-10-09 10:59:33.308884857 +0000 UTC m=+0.715734252 container remove 07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0 (image=quay.io/ceph/ceph:v19, name=competent_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:59:33 compute-0 systemd[1]: libpod-conmon-07553f3152dac9b25105e9d96554665bc96af3f0d4a4d334bec35ccd16fbfdf0.scope: Deactivated successfully.
Oct  9 10:59:33 compute-0 python3[16669]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:33 compute-0 podman[16670]: 2025-10-09 10:59:33.630260516 +0000 UTC m=+0.038000118 container create 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:33 compute-0 systemd[1]: Started libpod-conmon-5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062.scope.
Oct  9 10:59:33 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5300fbaba03f8f5d9eb9aa74e5a2a1564330f2a73bc2bc8e61293b039d1920ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5300fbaba03f8f5d9eb9aa74e5a2a1564330f2a73bc2bc8e61293b039d1920ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:33 compute-0 podman[16670]: 2025-10-09 10:59:33.615706525 +0000 UTC m=+0.023446147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:33 compute-0 podman[16670]: 2025-10-09 10:59:33.710670085 +0000 UTC m=+0.118409687 container init 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Oct  9 10:59:33 compute-0 podman[16670]: 2025-10-09 10:59:33.716302731 +0000 UTC m=+0.124042333 container start 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:33 compute-0 podman[16670]: 2025-10-09 10:59:33.719443325 +0000 UTC m=+0.127182947 container attach 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:59:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct  9 10:59:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 10:59:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  9 10:59:34 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 10:59:34 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 10:59:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 10:59:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct  9 10:59:34 compute-0 hungry_booth[16685]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  9 10:59:34 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct  9 10:59:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:34 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:34 compute-0 systemd[1]: libpod-5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062.scope: Deactivated successfully.
Oct  9 10:59:34 compute-0 podman[16670]: 2025-10-09 10:59:34.313578305 +0000 UTC m=+0.721317907 container died 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5300fbaba03f8f5d9eb9aa74e5a2a1564330f2a73bc2bc8e61293b039d1920ff-merged.mount: Deactivated successfully.
Oct  9 10:59:34 compute-0 podman[16670]: 2025-10-09 10:59:34.358587144 +0000 UTC m=+0.766326746 container remove 5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062 (image=quay.io/ceph/ceph:v19, name=hungry_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:34 compute-0 systemd[1]: libpod-conmon-5f2e635e9688af941d364313ccca4111855f0e1c4147372a8f3abbd2be99c062.scope: Deactivated successfully.
Oct  9 10:59:34 compute-0 python3[16748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:34 compute-0 podman[16749]: 2025-10-09 10:59:34.70803378 +0000 UTC m=+0.060855873 container create 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:34 compute-0 systemd[1]: Started libpod-conmon-7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c.scope.
Oct  9 10:59:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21d8348d3fb20b6cbb1a51767dfefaf679daa24b0c0af5cfe82c917fbf86dc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:34 compute-0 podman[16749]: 2025-10-09 10:59:34.674594705 +0000 UTC m=+0.027416828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21d8348d3fb20b6cbb1a51767dfefaf679daa24b0c0af5cfe82c917fbf86dc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:34 compute-0 podman[16749]: 2025-10-09 10:59:34.781661486 +0000 UTC m=+0.134483609 container init 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:34 compute-0 podman[16749]: 2025-10-09 10:59:34.789442463 +0000 UTC m=+0.142264556 container start 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:34 compute-0 podman[16749]: 2025-10-09 10:59:34.793147325 +0000 UTC m=+0.145969418 container attach 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_10:59:34
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: [balancer INFO root] do_upmap
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'volumes']
Oct  9 10:59:34 compute-0 ceph-mgr[4997]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 10:59:35 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:35 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct  9 10:59:35 compute-0 clever_cerf[16765]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 5d745d46-71cb-4744-beab-01b17dc3fa33 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:35 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:35 compute-0 systemd[1]: libpod-7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c.scope: Deactivated successfully.
Oct  9 10:59:35 compute-0 podman[16790]: 2025-10-09 10:59:35.396766989 +0000 UTC m=+0.023142457 container died 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:59:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c21d8348d3fb20b6cbb1a51767dfefaf679daa24b0c0af5cfe82c917fbf86dc5-merged.mount: Deactivated successfully.
Oct  9 10:59:35 compute-0 podman[16790]: 2025-10-09 10:59:35.569388527 +0000 UTC m=+0.195763985 container remove 7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c (image=quay.io/ceph/ceph:v19, name=clever_cerf, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:59:35 compute-0 systemd[1]: libpod-conmon-7db8480d97b98cd2db4348e84ea9363bf411941491e3b6a09455638b6d8e3a2c.scope: Deactivated successfully.
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:59:36 compute-0 ceph-mon[4705]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:36 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 10:59:36 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:36 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:36 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 4eb4d131-d4d9-4253-b6c2-b75a76b3edd8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:36 compute-0 python3[16880]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:59:36 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:36 compute-0 python3[16951]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007576.2734544-33819-251117747643062/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:59:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 10:59:37 compute-0 python3[17053]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct  9 10:59:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:37 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:37 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=11.140234947s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active pruub 61.408672333s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:37 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 4ad8f058-b30f-4e26-9505-d1aa30318f61 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  9 10:59:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:37 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27 pruub=11.140234947s) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown pruub 61.408672333s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:37 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:37 compute-0 python3[17128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007577.2449334-33833-12262316438067/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=17f2de1495fb97deba536315b8de0da540b580ae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:59:38 compute-0 python3[17178]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.314887278 +0000 UTC m=+0.043077076 container create 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 10:59:38 compute-0 systemd[1]: Started libpod-conmon-16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26.scope.
Oct  9 10:59:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb069bf205e1a67658becc8bdcaecc8b9b273bc5feaf14953d6304dadca578/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb069bf205e1a67658becc8bdcaecc8b9b273bc5feaf14953d6304dadca578/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb069bf205e1a67658becc8bdcaecc8b9b273bc5feaf14953d6304dadca578/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.390431177 +0000 UTC m=+0.118620995 container init 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.298177056 +0000 UTC m=+0.026366874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.395581837 +0000 UTC m=+0.123771635 container start 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.398475263 +0000 UTC m=+0.126665061 container attach 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:38 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 596f0d1c-926b-4801-9ee6-90f890828d52 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e28 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=14/15 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=14/14 les/c/f=15/15/0 sis=27) [0] r=0 lpr=27 pi=[14,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:38 compute-0 ceph-mon[4705]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: Cluster is now healthy
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v81: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:59:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 10:59:38 compute-0 angry_gates[17195]: 
Oct  9 10:59:38 compute-0 angry_gates[17195]: [global]
Oct  9 10:59:38 compute-0 angry_gates[17195]: #011fsid = e990987d-9393-5e96-99ae-9e3a3319f191
Oct  9 10:59:38 compute-0 angry_gates[17195]: #011mon_host = 192.168.122.100
Oct  9 10:59:38 compute-0 systemd[1]: libpod-16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26.scope: Deactivated successfully.
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.781327194 +0000 UTC m=+0.509517002 container died 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 10:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0fb069bf205e1a67658becc8bdcaecc8b9b273bc5feaf14953d6304dadca578-merged.mount: Deactivated successfully.
Oct  9 10:59:38 compute-0 podman[17179]: 2025-10-09 10:59:38.8186901 +0000 UTC m=+0.546879898 container remove 16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26 (image=quay.io/ceph/ceph:v19, name=angry_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:38 compute-0 systemd[1]: libpod-conmon-16e4e1dfa1b4f82914bb4350125ac20d2df07effa3347ca6b2e7fcbde7abaa26.scope: Deactivated successfully.
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:39 compute-0 python3[17255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.207733017 +0000 UTC m=+0.036849820 container create 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 10:59:39 compute-0 systemd[1]: Started libpod-conmon-195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71.scope.
Oct  9 10:59:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7741b1b0094ea4c3f6f5d28934aa0aa99d0cace90f134664dc0c7fa8741c4e09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7741b1b0094ea4c3f6f5d28934aa0aa99d0cace90f134664dc0c7fa8741c4e09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7741b1b0094ea4c3f6f5d28934aa0aa99d0cace90f134664dc0c7fa8741c4e09/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.266883322 +0000 UTC m=+0.096000145 container init 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.272818149 +0000 UTC m=+0.101934952 container start 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.278795576 +0000 UTC m=+0.107912379 container attach 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.193780525 +0000 UTC m=+0.022897348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  9 10:59:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:39 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:39 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 01cf04fc-9559-485e-b21b-bfd68d5ef424 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009976387s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302444458s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009521484s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302017212s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009976387s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302444458s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009521484s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302017212s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.010087967s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302627563s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.010087967s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302627563s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009572029s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302207947s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009183884s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301849365s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009527206s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302223206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009183884s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301849365s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009527206s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302223206s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008769035s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301506042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008769035s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009572029s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302207947s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008671761s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301506042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008671761s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008648872s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301521301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008648872s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301521301s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008500099s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301445007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008440971s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301406860s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008500099s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301445007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008269310s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301254272s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008475304s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301460266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008269310s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301254272s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008475304s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301460266s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009160042s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.302230835s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008208275s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301284790s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008208275s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301284790s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008113861s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301223755s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008113861s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301223755s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008237839s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.301399231s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=10.205020905s) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active pruub 62.498195648s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008237839s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301399231s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.007522583s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300765991s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.007522583s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300765991s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=10.995273590s) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 active pruub 63.288612366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.007533073s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300926208s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.007533073s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300926208s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006925583s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300476074s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.009160042s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302230835s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006925583s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300476074s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.008440971s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301406860s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006767273s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300468445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006767273s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006715775s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300468445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006715775s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006201744s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300025940s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006201744s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300025940s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006336212s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.300201416s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.006336212s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300201416s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005688667s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.299667358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005688667s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299667358s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005211830s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.299232483s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005211830s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299232483s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005348206s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.299453735s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004918098s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.299034119s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005348206s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299453735s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004918098s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299034119s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004501343s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.298690796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004501343s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298690796s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004537582s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.298759460s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.004537582s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298759460s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005562782s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.299865723s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.005562782s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299865723s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.000967026s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 active pruub 67.295349121s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=29 pruub=15.000967026s) [] r=-1 lpr=29 pi=[27,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.295349121s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=10.205020905s) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown pruub 62.498195648s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:39 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=10.995273590s) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.288612366s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:39 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1848230378; not ready for session (expect reconnect)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:39 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct  9 10:59:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2767728598' entity='client.admin' 
Oct  9 10:59:39 compute-0 elastic_agnesi[17297]: set ssl_option
Oct  9 10:59:39 compute-0 systemd[1]: libpod-195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71.scope: Deactivated successfully.
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.732909865 +0000 UTC m=+0.562026668 container died 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7741b1b0094ea4c3f6f5d28934aa0aa99d0cace90f134664dc0c7fa8741c4e09-merged.mount: Deactivated successfully.
Oct  9 10:59:39 compute-0 podman[17279]: 2025-10-09 10:59:39.771420509 +0000 UTC m=+0.600537312 container remove 195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71 (image=quay.io/ceph/ceph:v19, name=elastic_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:39 compute-0 systemd[1]: libpod-conmon-195f6a88cee0130d3bc1ed4d0ef889478f400422b99ae07f9a47582d172cde71.scope: Deactivated successfully.
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:39 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 10:59:40 compute-0 python3[17358]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.107822384 +0000 UTC m=+0.040951745 container create 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:40 compute-0 systemd[1]: Started libpod-conmon-799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e.scope.
Oct  9 10:59:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e14a0553c43b165f89de37ab6c5584f6edd705d269d4b63c4221cf70160513f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e14a0553c43b165f89de37ab6c5584f6edd705d269d4b63c4221cf70160513f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e14a0553c43b165f89de37ab6c5584f6edd705d269d4b63c4221cf70160513f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.089913932 +0000 UTC m=+0.023043313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.193210449 +0000 UTC m=+0.126339840 container init 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.203661244 +0000 UTC m=+0.136790625 container start 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.208560226 +0000 UTC m=+0.141689647 container attach 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 priceless_borg[17375]: Scheduled rgw.rgw update...
Oct  9 10:59:40 compute-0 priceless_borg[17375]: Scheduled ingress.rgw.default update...
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  9 10:59:40 compute-0 systemd[1]: libpod-799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e.scope: Deactivated successfully.
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.628722442 +0000 UTC m=+0.561851803 container died 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1848230378; not ready for session (expect reconnect)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev deea0ceb-701d-4731-a259-827b4cb5ea80 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e14a0553c43b165f89de37ab6c5584f6edd705d269d4b63c4221cf70160513f-merged.mount: Deactivated successfully.
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 5d745d46-71cb-4744-beab-01b17dc3fa33 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 5d745d46-71cb-4744-beab-01b17dc3fa33 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 4eb4d131-d4d9-4253-b6c2-b75a76b3edd8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 4eb4d131-d4d9-4253-b6c2-b75a76b3edd8 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 4ad8f058-b30f-4e26-9505-d1aa30318f61 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 4ad8f058-b30f-4e26-9505-d1aa30318f61 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 596f0d1c-926b-4801-9ee6-90f890828d52 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 596f0d1c-926b-4801-9ee6-90f890828d52 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 01cf04fc-9559-485e-b21b-bfd68d5ef424 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 01cf04fc-9559-485e-b21b-bfd68d5ef424 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev deea0ceb-701d-4731-a259-827b4cb5ea80 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event deea0ceb-701d-4731-a259-827b4cb5ea80 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=29) [] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=29/30 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 systemd[6033]: Starting Mark boot as successful...
Oct  9 10:59:40 compute-0 podman[17359]: 2025-10-09 10:59:40.676995349 +0000 UTC m=+0.610124710 container remove 799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e (image=quay.io/ceph/ceph:v19, name=priceless_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 systemd[6033]: Finished Mark boot as successful.
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [0] r=0 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:40 compute-0 systemd[1]: libpod-conmon-799c305ba63c1154e22803e57c0ededc9d3b366c9f55f3979fe6e2c5b1b94d2e.scope: Deactivated successfully.
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:59:40 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v84: 131 pgs: 1 peering, 93 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2767728598' entity='client.admin' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: Saving service ingress.rgw.default spec with placement count:2
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:40 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:41 compute-0 python3[17488]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 10:59:41 compute-0 python3[17559]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007580.8301256-33853-93849544901632/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 10:59:41 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  9 10:59:41 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  9 10:59:41 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 11 completed events
Oct  9 10:59:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:41 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1848230378; not ready for session (expect reconnect)
Oct  9 10:59:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:41 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct  9 10:59:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:41 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:41 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:41 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:41 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:41 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 10:59:42 compute-0 python3[17609]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.084394824 +0000 UTC m=+0.042680452 container create 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:42 compute-0 systemd[1]: Started libpod-conmon-9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b.scope.
Oct  9 10:59:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c273dd2052859eba4620b9dfb2aab50e393acc868154b06c89bfa6e12d081186/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c273dd2052859eba4620b9dfb2aab50e393acc868154b06c89bfa6e12d081186/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c273dd2052859eba4620b9dfb2aab50e393acc868154b06c89bfa6e12d081186/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.156709546 +0000 UTC m=+0.114995194 container init 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.062392327 +0000 UTC m=+0.020678005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.165077013 +0000 UTC m=+0.123362641 container start 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.168425213 +0000 UTC m=+0.126710841 container attach 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  9 10:59:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 compassionate_driscoll[17625]: Scheduled node-exporter update...
Oct  9 10:59:42 compute-0 compassionate_driscoll[17625]: Scheduled grafana update...
Oct  9 10:59:42 compute-0 compassionate_driscoll[17625]: Scheduled prometheus update...
Oct  9 10:59:42 compute-0 compassionate_driscoll[17625]: Scheduled alertmanager update...
Oct  9 10:59:42 compute-0 systemd[1]: libpod-9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b.scope: Deactivated successfully.
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.584513494 +0000 UTC m=+0.542799122 container died 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c273dd2052859eba4620b9dfb2aab50e393acc868154b06c89bfa6e12d081186-merged.mount: Deactivated successfully.
Oct  9 10:59:42 compute-0 podman[17610]: 2025-10-09 10:59:42.621105964 +0000 UTC m=+0.579391592 container remove 9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b (image=quay.io/ceph/ceph:v19, name=compassionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 10:59:42 compute-0 systemd[1]: libpod-conmon-9d6a936ac931ed55101823dc6a25658bf7f9eb12f52fa312199db5b9d419144b.scope: Deactivated successfully.
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1848230378; not ready for session (expect reconnect)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v86: 193 pgs: 1 peering, 155 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378] boot
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  9 10:59:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 10:59:42 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.19( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850718498s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299865723s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.19( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850692749s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299865723s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849351883s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298759460s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849324226s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298759460s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849143028s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298690796s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849119186s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.298690796s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849383354s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299034119s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849366188s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299034119s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.846273422s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.295349121s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.845629692s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.295349121s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849630356s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299453735s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849622726s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299453735s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849181175s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299232483s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850095749s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300201416s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850077629s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300201416s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849169731s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299232483s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849848747s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300025940s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849838257s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300025940s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849667549s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299667358s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850210190s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850198746s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850084305s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850072861s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300468445s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849414825s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.299667358s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849983215s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300476074s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849973679s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300476074s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.3( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850335121s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300926208s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.5( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850325584s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300926208s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.5( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=8.870211601s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active pruub 64.320816040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=32 pruub=7.837944984s) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.288612366s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850084305s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300765991s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=32 pruub=7.837935925s) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.288612366s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850071907s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.300765991s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850585938s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301399231s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.6( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=27/28 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850572586s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301399231s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.6( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.3( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.c( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850247383s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301284790s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850234985s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301284790s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.851065636s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302230835s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.851052284s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302230835s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.a( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849837303s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301254272s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849817276s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301254272s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.850297928s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301223755s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849653244s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301223755s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849738121s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301406860s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849723816s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301406860s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849575996s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301460266s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849554062s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301460266s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849245071s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301445007s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849203110s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301445007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849245071s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849222183s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849113464s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301521301s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849099159s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301521301s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.14( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.14( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.848981857s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.17( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.848972321s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301506042s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.17( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849613190s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302223206s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849606514s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302223206s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849138260s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301849365s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849128723s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.301849365s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849183083s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302017212s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849171638s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302017212s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849666595s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302627563s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849657059s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302627563s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849161148s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302207947s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849152565s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302207947s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1e( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849323273s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302444458s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.849313736s) [2] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.302444458s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=16/17 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=-1 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=8.870211601s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown pruub 64.320816040s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Unable to set osd_memory_target on compute-2 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Saving service node-exporter spec with placement *
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Saving service grafana spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Saving service prometheus spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: Saving service alertmanager spec with placement compute-0;count:1
Oct  9 10:59:42 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:42 compute-0 ceph-mon[4705]: osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378] boot
Oct  9 10:59:43 compute-0 python3[18005]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.160444062 +0000 UTC m=+0.038769723 container create 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:43 compute-0 systemd[1]: Started libpod-conmon-276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6.scope.
Oct  9 10:59:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60730ea69bc98cb1b8792d534330161cc412b6378533e8b00e849f1664cc7d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60730ea69bc98cb1b8792d534330161cc412b6378533e8b00e849f1664cc7d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60730ea69bc98cb1b8792d534330161cc412b6378533e8b00e849f1664cc7d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.234307634 +0000 UTC m=+0.112633295 container init 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.141792255 +0000 UTC m=+0.020117946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.240501729 +0000 UTC m=+0.118827390 container start 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.243342983 +0000 UTC m=+0.121668644 container attach 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:43 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  9 10:59:43 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1352278890' entity='client.admin' 
Oct  9 10:59:43 compute-0 systemd[1]: libpod-276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6.scope: Deactivated successfully.
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.607621241 +0000 UTC m=+0.485946892 container died 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60730ea69bc98cb1b8792d534330161cc412b6378533e8b00e849f1664cc7d8-merged.mount: Deactivated successfully.
Oct  9 10:59:43 compute-0 podman[18062]: 2025-10-09 10:59:43.645457102 +0000 UTC m=+0.523782763 container remove 276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6 (image=quay.io/ceph/ceph:v19, name=awesome_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:43 compute-0 systemd[1]: libpod-conmon-276f273ddf10c75bc825c64e199b8244771a33516bde580ddde7d102472866a6.scope: Deactivated successfully.
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  9 10:59:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  9 10:59:43 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:43 compute-0 ceph-mon[4705]: OSD bench result of 7809.019048 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 10:59:43 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:43 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:43 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:59:43 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1352278890' entity='client.admin' 
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.857109722 +0000 UTC m=+0.041427561 container create f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:43 compute-0 systemd[1]: Started libpod-conmon-f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553.scope.
Oct  9 10:59:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.92747506 +0000 UTC m=+0.111792929 container init f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.93414679 +0000 UTC m=+0.118464629 container start f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.839108256 +0000 UTC m=+0.023426125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:43 compute-0 clever_gates[18315]: 167 167
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.937322525 +0000 UTC m=+0.121640364 container attach f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:43 compute-0 systemd[1]: libpod-f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553.scope: Deactivated successfully.
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.937895424 +0000 UTC m=+0.122213263 container died f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:59:43 compute-0 python3[18298]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a714f379f365a77bdebaf307c78f5e79485b576473c63bdedd8ca8e8fc7897e5-merged.mount: Deactivated successfully.
Oct  9 10:59:43 compute-0 podman[18299]: 2025-10-09 10:59:43.97133499 +0000 UTC m=+0.155652829 container remove f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:43 compute-0 systemd[1]: libpod-conmon-f9d98be5f93b9800cb0c62ca87e4e9ce9d7d3da18041c89896328fa7f90d0553.scope: Deactivated successfully.
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.008600632 +0000 UTC m=+0.038088091 container create 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:44 compute-0 systemd[1]: Started libpod-conmon-815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419.scope.
Oct  9 10:59:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7740a085c6aea73f07c0580d70c639b0cf89fb28a76ff68e9a1ae2292c4b2c98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7740a085c6aea73f07c0580d70c639b0cf89fb28a76ff68e9a1ae2292c4b2c98/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7740a085c6aea73f07c0580d70c639b0cf89fb28a76ff68e9a1ae2292c4b2c98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.081729801 +0000 UTC m=+0.111217280 container init 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:43.99282794 +0000 UTC m=+0.022315419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.088021029 +0000 UTC m=+0.117508488 container start 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.092918701 +0000 UTC m=+0.122406180 container attach 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.124137543 +0000 UTC m=+0.038149912 container create 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:59:44 compute-0 systemd[1]: Started libpod-conmon-71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab.scope.
Oct  9 10:59:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.18331206 +0000 UTC m=+0.097324449 container init 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.191078118 +0000 UTC m=+0.105090487 container start 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.194698617 +0000 UTC m=+0.108711016 container attach 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.107194983 +0000 UTC m=+0.021207372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct  9 10:59:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/36201585' entity='client.admin' 
Oct  9 10:59:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  9 10:59:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  9 10:59:44 compute-0 systemd[1]: libpod-815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419.scope: Deactivated successfully.
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.455058407 +0000 UTC m=+0.484545866 container died 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7740a085c6aea73f07c0580d70c639b0cf89fb28a76ff68e9a1ae2292c4b2c98-merged.mount: Deactivated successfully.
Oct  9 10:59:44 compute-0 podman[18327]: 2025-10-09 10:59:44.491886336 +0000 UTC m=+0.521373795 container remove 815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419 (image=quay.io/ceph/ceph:v19, name=epic_moore, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:44 compute-0 systemd[1]: libpod-conmon-815ec0f22c0522dc2b8a264e3e384aa8de21e596bc75862182257b3e8fd18419.scope: Deactivated successfully.
Oct  9 10:59:44 compute-0 intelligent_perlman[18372]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:59:44 compute-0 intelligent_perlman[18372]: --> All data devices are unavailable
Oct  9 10:59:44 compute-0 systemd[1]: libpod-71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab.scope: Deactivated successfully.
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.527389829 +0000 UTC m=+0.441402198 container died 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d8155311c4306007fbf8dd482e1729556d48b423a8730d832b431279a874a56-merged.mount: Deactivated successfully.
Oct  9 10:59:44 compute-0 podman[18355]: 2025-10-09 10:59:44.567180625 +0000 UTC m=+0.481192994 container remove 71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_perlman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  9 10:59:44 compute-0 systemd[1]: libpod-conmon-71d4a9b130145236d5811cde8b95a6e542ab90f422289f2063d77f9700ea8cab.scope: Deactivated successfully.
Oct  9 10:59:44 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 65 peering, 62 unknown, 66 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  9 10:59:44 compute-0 python3[18468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:44 compute-0 podman[18506]: 2025-10-09 10:59:44.83488398 +0000 UTC m=+0.045427344 container create cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 10:59:44 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/36201585' entity='client.admin' 
Oct  9 10:59:44 compute-0 systemd[1]: Started libpod-conmon-cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd.scope.
Oct  9 10:59:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb950db77fd6bad8fe37cfc70e9b3b193da6ceb6ad7834dd9f20d0d2f4b7478/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb950db77fd6bad8fe37cfc70e9b3b193da6ceb6ad7834dd9f20d0d2f4b7478/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb950db77fd6bad8fe37cfc70e9b3b193da6ceb6ad7834dd9f20d0d2f4b7478/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:44 compute-0 podman[18506]: 2025-10-09 10:59:44.903886971 +0000 UTC m=+0.114430355 container init cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:44 compute-0 podman[18506]: 2025-10-09 10:59:44.812732087 +0000 UTC m=+0.023275471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:44 compute-0 podman[18506]: 2025-10-09 10:59:44.910401707 +0000 UTC m=+0.120945071 container start cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:44 compute-0 podman[18506]: 2025-10-09 10:59:44.913862781 +0000 UTC m=+0.124406135 container attach cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.06984081 +0000 UTC m=+0.036701575 container create 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:45 compute-0 systemd[1]: Started libpod-conmon-1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533.scope.
Oct  9 10:59:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.137206488 +0000 UTC m=+0.104067253 container init 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.144394736 +0000 UTC m=+0.111255501 container start 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:45 compute-0 pedantic_fermi[18602]: 167 167
Oct  9 10:59:45 compute-0 systemd[1]: libpod-1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.054104299 +0000 UTC m=+0.020965084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.150334252 +0000 UTC m=+0.117195047 container attach 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.151020595 +0000 UTC m=+0.117881360 container died 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-592c181de90604467dacb9028c304277f06d9ecd440371b6e2dfe653a7c11155-merged.mount: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18585]: 2025-10-09 10:59:45.187070337 +0000 UTC m=+0.153931102 container remove 1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_fermi, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:45 compute-0 systemd[1]: libpod-conmon-1584c3df1596b03f117dc8c1d1344f848b1d0ba6b2cbea9fffe5b6d93b610533.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct  9 10:59:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1390179226' entity='client.admin' 
Oct  9 10:59:45 compute-0 systemd[1]: libpod-cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18506]: 2025-10-09 10:59:45.312231796 +0000 UTC m=+0.522775160 container died cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cb950db77fd6bad8fe37cfc70e9b3b193da6ceb6ad7834dd9f20d0d2f4b7478-merged.mount: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18506]: 2025-10-09 10:59:45.348604709 +0000 UTC m=+0.559148073 container remove cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd (image=quay.io/ceph/ceph:v19, name=heuristic_shirley, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.356127928 +0000 UTC m=+0.047433110 container create 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:45 compute-0 systemd[1]: libpod-conmon-cad69b518fab7ff1e6eafdeacf4a01534d4d8cd0ee653bf65b9b59118d75b0cd.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 systemd[1]: Started libpod-conmon-1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b.scope.
Oct  9 10:59:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e22f65b6b59203176f01ef57f4dd1577de8eb352bdff1088bef9d2e4d8f8875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e22f65b6b59203176f01ef57f4dd1577de8eb352bdff1088bef9d2e4d8f8875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e22f65b6b59203176f01ef57f4dd1577de8eb352bdff1088bef9d2e4d8f8875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e22f65b6b59203176f01ef57f4dd1577de8eb352bdff1088bef9d2e4d8f8875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.418589174 +0000 UTC m=+0.109894376 container init 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:45 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.329858419 +0000 UTC m=+0.021163621 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.427681185 +0000 UTC m=+0.118986367 container start 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 10:59:45 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.430793548 +0000 UTC m=+0.122098730 container attach 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:45 compute-0 elated_sammet[18658]: {
Oct  9 10:59:45 compute-0 elated_sammet[18658]:    "0": [
Oct  9 10:59:45 compute-0 elated_sammet[18658]:        {
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "devices": [
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "/dev/loop3"
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            ],
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "lv_name": "ceph_lv0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "lv_size": "21470642176",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0ea02d81-16d9-4b32-9888-cc7ebc83243e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "lv_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "name": "ceph_lv0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "tags": {
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.block_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.cluster_name": "ceph",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.crush_device_class": "",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.encrypted": "0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.osd_fsid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.osd_id": "0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.type": "block",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.vdo": "0",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:                "ceph.with_tpm": "0"
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            },
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "type": "block",
Oct  9 10:59:45 compute-0 elated_sammet[18658]:            "vg_name": "ceph_vg0"
Oct  9 10:59:45 compute-0 elated_sammet[18658]:        }
Oct  9 10:59:45 compute-0 elated_sammet[18658]:    ]
Oct  9 10:59:45 compute-0 elated_sammet[18658]: }
Oct  9 10:59:45 compute-0 systemd[1]: libpod-1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.710321122 +0000 UTC m=+0.401626324 container died 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e22f65b6b59203176f01ef57f4dd1577de8eb352bdff1088bef9d2e4d8f8875-merged.mount: Deactivated successfully.
Oct  9 10:59:45 compute-0 podman[18629]: 2025-10-09 10:59:45.759999065 +0000 UTC m=+0.451304247 container remove 1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:45 compute-0 systemd[1]: libpod-conmon-1a377a891737b3a4b366a206d16031f10a3976207bf87c43aa15c9a6d9f51a8b.scope: Deactivated successfully.
Oct  9 10:59:45 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1390179226' entity='client.admin' 
Oct  9 10:59:45 compute-0 python3[18706]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.262165353 +0000 UTC m=+0.040988017 container create 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:59:46 compute-0 systemd[1]: Started libpod-conmon-2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89.scope.
Oct  9 10:59:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.246399981 +0000 UTC m=+0.025222665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.354616551 +0000 UTC m=+0.133439245 container init 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.361084684 +0000 UTC m=+0.139907348 container start 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.363865756 +0000 UTC m=+0.142688420 container attach 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:46 compute-0 great_nightingale[18850]: 167 167
Oct  9 10:59:46 compute-0 systemd[1]: libpod-2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89.scope: Deactivated successfully.
Oct  9 10:59:46 compute-0 conmon[18850]: conmon 2a9266c3412d306caaed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89.scope/container/memory.events
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.369489552 +0000 UTC m=+0.148312216 container died 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1539dadd3a677fb7a96c21190b376b40fd58a37fa736a1e5c66f09c6fd07ce0-merged.mount: Deactivated successfully.
Oct  9 10:59:46 compute-0 podman[18808]: 2025-10-09 10:59:46.404288413 +0000 UTC m=+0.183111077 container remove 2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 10:59:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  9 10:59:46 compute-0 python3[18847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.izrudc/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  9 10:59:46 compute-0 systemd[1]: libpod-conmon-2a9266c3412d306caaed3d3987d0e26e7efacba991ba1392aadfb07fa2de8e89.scope: Deactivated successfully.
Oct  9 10:59:46 compute-0 podman[18866]: 2025-10-09 10:59:46.465724425 +0000 UTC m=+0.038437802 container create be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 10:59:46 compute-0 systemd[1]: Started libpod-conmon-be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b.scope.
Oct  9 10:59:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680e6cfe2522f8f966524d1387094996ea765bb5e3de93ae9aef049d3698d34f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680e6cfe2522f8f966524d1387094996ea765bb5e3de93ae9aef049d3698d34f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680e6cfe2522f8f966524d1387094996ea765bb5e3de93ae9aef049d3698d34f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 podman[18866]: 2025-10-09 10:59:46.542548496 +0000 UTC m=+0.115261883 container init be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:46 compute-0 podman[18866]: 2025-10-09 10:59:46.448757604 +0000 UTC m=+0.021471001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:46 compute-0 podman[18866]: 2025-10-09 10:59:46.548668108 +0000 UTC m=+0.121381485 container start be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:46 compute-0 podman[18866]: 2025-10-09 10:59:46.552267127 +0000 UTC m=+0.124980524 container attach be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:46 compute-0 podman[18890]: 2025-10-09 10:59:46.55656566 +0000 UTC m=+0.042813348 container create d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:59:46 compute-0 systemd[1]: Started libpod-conmon-d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5.scope.
Oct  9 10:59:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08e810fb106aa47276325e58d6c3fba0c729df00a84f3c7c986306eee047250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08e810fb106aa47276325e58d6c3fba0c729df00a84f3c7c986306eee047250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08e810fb106aa47276325e58d6c3fba0c729df00a84f3c7c986306eee047250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08e810fb106aa47276325e58d6c3fba0c729df00a84f3c7c986306eee047250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:46 compute-0 podman[18890]: 2025-10-09 10:59:46.535986419 +0000 UTC m=+0.022234127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:46 compute-0 podman[18890]: 2025-10-09 10:59:46.639549604 +0000 UTC m=+0.125797302 container init d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:59:46 compute-0 podman[18890]: 2025-10-09 10:59:46.646201554 +0000 UTC m=+0.132449242 container start d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 10:59:46 compute-0 podman[18890]: 2025-10-09 10:59:46.652634056 +0000 UTC m=+0.138881854 container attach d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:46 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  9 10:59:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.izrudc/server_addr}] v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1920736686' entity='client.admin' 
Oct  9 10:59:47 compute-0 systemd[1]: libpod-be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b.scope: Deactivated successfully.
Oct  9 10:59:47 compute-0 podman[18866]: 2025-10-09 10:59:47.190204405 +0000 UTC m=+0.762917782 container died be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-680e6cfe2522f8f966524d1387094996ea765bb5e3de93ae9aef049d3698d34f-merged.mount: Deactivated successfully.
Oct  9 10:59:47 compute-0 podman[18866]: 2025-10-09 10:59:47.260381566 +0000 UTC m=+0.833094943 container remove be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b (image=quay.io/ceph/ceph:v19, name=cranky_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:47 compute-0 systemd[1]: libpod-conmon-be2bfd4dbbd90efc6880cb3bd3654caae7861b351ba0482270bd9c55c112430b.scope: Deactivated successfully.
Oct  9 10:59:47 compute-0 lvm[19015]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:59:47 compute-0 lvm[19015]: VG ceph_vg0 finished
Oct  9 10:59:47 compute-0 determined_bose[18909]: {}
Oct  9 10:59:47 compute-0 systemd[1]: libpod-d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5.scope: Deactivated successfully.
Oct  9 10:59:47 compute-0 systemd[1]: libpod-d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5.scope: Consumed 1.085s CPU time.
Oct  9 10:59:47 compute-0 podman[18890]: 2025-10-09 10:59:47.349488413 +0000 UTC m=+0.835736101 container died d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08e810fb106aa47276325e58d6c3fba0c729df00a84f3c7c986306eee047250-merged.mount: Deactivated successfully.
Oct  9 10:59:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  9 10:59:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  9 10:59:47 compute-0 podman[18890]: 2025-10-09 10:59:47.393145787 +0000 UTC m=+0.879393475 container remove d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:47 compute-0 systemd[1]: libpod-conmon-d28ca764cf8a7a080342b6d1136ab8631e351c3dfefbf4fd87d725656fed56c5.scope: Deactivated successfully.
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:47 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 5931ad43-8e96-42ba-86b0-58d67e70fc5a (Updating rgw.rgw deployment (+3 -> 3))
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:47 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:47 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.klwwrz on compute-2
Oct  9 10:59:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.klwwrz on compute-2
Oct  9 10:59:48 compute-0 python3[19059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.rtiqvm/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1920736686' entity='client.admin' 
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:48 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.192089439 +0000 UTC m=+0.039234688 container create 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:59:48 compute-0 systemd[1]: Started libpod-conmon-1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad.scope.
Oct  9 10:59:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbda4d124f6485e5e894979069e2c722264d4a082eaaca4a006f6961a8626775/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbda4d124f6485e5e894979069e2c722264d4a082eaaca4a006f6961a8626775/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbda4d124f6485e5e894979069e2c722264d4a082eaaca4a006f6961a8626775/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.175665636 +0000 UTC m=+0.022810905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.272953915 +0000 UTC m=+0.120099184 container init 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.280462813 +0000 UTC m=+0.127608062 container start 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.285513469 +0000 UTC m=+0.132658738 container attach 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:59:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  9 10:59:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.rtiqvm/server_addr}] v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3282222437' entity='client.admin' 
Oct  9 10:59:48 compute-0 systemd[1]: libpod-1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad.scope: Deactivated successfully.
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.696879924 +0000 UTC m=+0.544025183 container died 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 10:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbda4d124f6485e5e894979069e2c722264d4a082eaaca4a006f6961a8626775-merged.mount: Deactivated successfully.
Oct  9 10:59:48 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 10:59:48 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:48 compute-0 podman[19060]: 2025-10-09 10:59:48.738852552 +0000 UTC m=+0.585997801 container remove 1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad (image=quay.io/ceph/ceph:v19, name=crazy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 10:59:48 compute-0 systemd[1]: libpod-conmon-1e12882d07a98d37f6e8f100542b68462318f235c70f67478016d50c289b23ad.scope: Deactivated successfully.
Oct  9 10:59:49 compute-0 ceph-mon[4705]: Deploying daemon rgw.rgw.compute-2.klwwrz on compute-2
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3282222437' entity='client.admin' 
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:59:49 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:59:49 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  9 10:59:49 compute-0 python3[19139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.agiurv/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:49 compute-0 podman[19140]: 2025-10-09 10:59:49.587658974 +0000 UTC m=+0.040144699 container create 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.vbxein on compute-1
Oct  9 10:59:49 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.vbxein on compute-1
Oct  9 10:59:49 compute-0 systemd[1]: Started libpod-conmon-59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3.scope.
Oct  9 10:59:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacfd54c7002949139d59b62da66dc6ab85309893c5c280c732d13f06c255428/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacfd54c7002949139d59b62da66dc6ab85309893c5c280c732d13f06c255428/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacfd54c7002949139d59b62da66dc6ab85309893c5c280c732d13f06c255428/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:49 compute-0 podman[19140]: 2025-10-09 10:59:49.658072213 +0000 UTC m=+0.110557958 container init 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  9 10:59:49 compute-0 podman[19140]: 2025-10-09 10:59:49.665045593 +0000 UTC m=+0.117531318 container start 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 10:59:49 compute-0 podman[19140]: 2025-10-09 10:59:49.569966649 +0000 UTC m=+0.022452394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:49 compute-0 podman[19140]: 2025-10-09 10:59:49.669041766 +0000 UTC m=+0.121527491 container attach 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.001162529s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342758179s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.001113892s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342758179s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.001091003s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342811584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.001050949s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342811584s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.120721817s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462493896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.120674133s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462493896s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000926971s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342803955s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000884056s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342803955s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000759125s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342742920s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.116608620s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.458610535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000743866s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342742920s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119418144s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.461433411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.116592407s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.458610535s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119401932s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.461433411s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000518799s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342643738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000506401s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342643738s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119306564s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.461448669s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119285583s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.461448669s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000356674s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342590332s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000340462s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342590332s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000396729s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342689514s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=15.000380516s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342689514s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.998691559s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.341087341s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119313240s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.461730957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.998675346s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.341087341s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119257927s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.461730957s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119506836s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462120056s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119490623s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462120056s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.996669769s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.339324951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.996650696s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.339324951s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995383263s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.338081360s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995367050s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.338081360s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995268822s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.338035583s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995253563s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.338035583s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995308876s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.338066101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119277954s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462142944s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.995193481s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.338066101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119297028s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462203979s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119241714s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462142944s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119286537s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462203979s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119238853s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462173462s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119221687s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462173462s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.990011215s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.333023071s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989993095s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.333023071s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119163513s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462234497s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119149208s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462234497s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989881516s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.333007812s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119186401s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462318420s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119172096s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462318420s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989862442s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.333007812s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988962173s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.332176208s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988945007s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.332176208s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989733696s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.333030701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118514061s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.461814880s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989714622s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.333030701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118497849s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.461814880s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989693642s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.333030701s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.989663124s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.333030701s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119067192s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462493896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.999404907s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.342842102s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.119052887s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462493896s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118864059s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462379456s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118841171s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462387085s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118844032s) [1] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462379456s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988569260s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.332138062s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988551140s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.332138062s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988505363s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.332122803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988486290s) [1] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.332122803s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118786812s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462501526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118824005s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462387085s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118771553s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462501526s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988174438s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active pruub 77.331954956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.988156319s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.331954956s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118681908s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active pruub 72.462524414s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[6.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34 pruub=10.118666649s) [2] r=-1 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.462524414s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34 pruub=14.999393463s) [2] r=-1 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.342842102s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.18( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.19( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.7( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.19( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.1e( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.2( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.1d( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.4( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.6( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.b( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.a( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.c( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.17( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct  9 10:59:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.13( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.10( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.b( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.e( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.8( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.9( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.e( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.6( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.1( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.4( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.6( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.3( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.4( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.2( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.9( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.1e( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.f( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.18( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[7.1b( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.1e( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.agiurv/server_addr}] v 0)
Oct  9 10:59:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1128491331' entity='client.admin' 
Oct  9 10:59:50 compute-0 systemd[1]: libpod-59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3.scope: Deactivated successfully.
Oct  9 10:59:50 compute-0 podman[19140]: 2025-10-09 10:59:50.151815682 +0000 UTC m=+0.604301437 container died 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dacfd54c7002949139d59b62da66dc6ab85309893c5c280c732d13f06c255428-merged.mount: Deactivated successfully.
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:50 compute-0 ceph-mon[4705]: Deploying daemon rgw.rgw.compute-1.vbxein on compute-1
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/2755107006' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 10:59:50 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1128491331' entity='client.admin' 
Oct  9 10:59:50 compute-0 podman[19140]: 2025-10-09 10:59:50.358127886 +0000 UTC m=+0.810613621 container remove 59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3 (image=quay.io/ceph/ceph:v19, name=gracious_lamarr, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 10:59:50 compute-0 systemd[1]: libpod-conmon-59df4b5f0c085a5c3e4cbe54ca22695cf091e9b8dcfb2b9df01d2322d69ae8a3.scope: Deactivated successfully.
Oct  9 10:59:50 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct  9 10:59:50 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct  9 10:59:50 compute-0 python3[19218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  9 10:59:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 10:59:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  9 10:59:50 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.1e( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.1f( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.1f( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.19( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.1e( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.1d( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.4( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.9( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.2( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.5( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.1( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.4( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.3( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.6( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.6( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.7( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.1( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.6( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.b( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.c( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.1e( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.a( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.19( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[2.e( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [0] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.18( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.17( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[5.14( empty local-lis/les=34/35 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.17( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.19( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [0] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[3.12( empty local-lis/les=34/35 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34) [0] r=0 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:50 compute-0 podman[19219]: 2025-10-09 10:59:50.734391069 +0000 UTC m=+0.057321776 container create a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:50 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v94: 194 pgs: 1 creating+peering, 44 peering, 149 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  9 10:59:50 compute-0 systemd[1]: Started libpod-conmon-a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4.scope.
Oct  9 10:59:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:50 compute-0 podman[19219]: 2025-10-09 10:59:50.704827392 +0000 UTC m=+0.027758119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97723bf89eca74424e2f86360b0bed10ee50adf42d160c4794c56a659ab7adbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97723bf89eca74424e2f86360b0bed10ee50adf42d160c4794c56a659ab7adbe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97723bf89eca74424e2f86360b0bed10ee50adf42d160c4794c56a659ab7adbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:50 compute-0 podman[19219]: 2025-10-09 10:59:50.831834852 +0000 UTC m=+0.154765589 container init a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 10:59:50 compute-0 podman[19219]: 2025-10-09 10:59:50.839238587 +0000 UTC m=+0.162169294 container start a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 10:59:50 compute-0 podman[19219]: 2025-10-09 10:59:50.842920889 +0000 UTC m=+0.165851596 container attach a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.cjdyiw on compute-0
Oct  9 10:59:51 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.cjdyiw on compute-0
Oct  9 10:59:51 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.1f deep-scrub starts
Oct  9 10:59:51 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.1f deep-scrub ok
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 10:59:51 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:51 compute-0 ceph-mgr[4997]: [progress WARNING root] Starting Global Recovery Event,45 pgs not in active + clean state
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 10:59:51 compute-0 reverent_tesla[19234]: module 'dashboard' is already disabled
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.izrudc(active, since 2m), standbys: compute-2.agiurv, compute-1.rtiqvm
Oct  9 10:59:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 10:59:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:51 compute-0 systemd[1]: libpod-a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4.scope: Deactivated successfully.
Oct  9 10:59:51 compute-0 podman[19219]: 2025-10-09 10:59:51.764268169 +0000 UTC m=+1.087198876 container died a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [0] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-97723bf89eca74424e2f86360b0bed10ee50adf42d160c4794c56a659ab7adbe-merged.mount: Deactivated successfully.
Oct  9 10:59:51 compute-0 podman[19219]: 2025-10-09 10:59:51.822501406 +0000 UTC m=+1.145432113 container remove a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:59:51 compute-0 systemd[1]: libpod-conmon-a030a71a2701f2c10032e9a42354fffb9bb5fbf9f36ee30aa15a2df5aaf810c4.scope: Deactivated successfully.
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.85470592 +0000 UTC m=+0.041007266 container create 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:51 compute-0 systemd[1]: Started libpod-conmon-2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839.scope.
Oct  9 10:59:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.930619711 +0000 UTC m=+0.116921107 container init 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.836580452 +0000 UTC m=+0.022881808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.939364341 +0000 UTC m=+0.125665677 container start 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 10:59:51 compute-0 nostalgic_bohr[19384]: 167 167
Oct  9 10:59:51 compute-0 systemd[1]: libpod-2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839.scope: Deactivated successfully.
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.9445034 +0000 UTC m=+0.130804766 container attach 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.945845215 +0000 UTC m=+0.132146561 container died 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct  9 10:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cac70e8d61a48917055dee1e07ef0a8112e3582121b58cfafdbccf30541455f9-merged.mount: Deactivated successfully.
Oct  9 10:59:51 compute-0 podman[19364]: 2025-10-09 10:59:51.98741919 +0000 UTC m=+0.173720526 container remove 2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_bohr, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:59:51 compute-0 systemd[1]: libpod-conmon-2b6e83a5c29ed8750f81d61610aa462426ba8dbe22bdf2b7f6c00e6b14d55839.scope: Deactivated successfully.
Oct  9 10:59:52 compute-0 systemd[1]: Reloading.
Oct  9 10:59:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:59:52 compute-0 python3[19429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:59:52 compute-0 podman[19465]: 2025-10-09 10:59:52.184023422 +0000 UTC m=+0.038792584 container create 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 10:59:52 compute-0 podman[19465]: 2025-10-09 10:59:52.169385278 +0000 UTC m=+0.024154460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:52 compute-0 systemd[1]: Started libpod-conmon-8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef.scope.
Oct  9 10:59:52 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  9 10:59:52 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  9 10:59:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fdd3c552a21266e8f2abfa032e241ef56ca953837610993d4991b87287d8c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fdd3c552a21266e8f2abfa032e241ef56ca953837610993d4991b87287d8c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fdd3c552a21266e8f2abfa032e241ef56ca953837610993d4991b87287d8c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 podman[19465]: 2025-10-09 10:59:52.368147171 +0000 UTC m=+0.222916383 container init 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:52 compute-0 systemd[1]: Reloading.
Oct  9 10:59:52 compute-0 ceph-mon[4705]: Deploying daemon rgw.rgw.compute-0.cjdyiw on compute-0
Oct  9 10:59:52 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:52 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:52 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:52 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 10:59:52 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 10:59:52 compute-0 podman[19465]: 2025-10-09 10:59:52.377574713 +0000 UTC m=+0.232343875 container start 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:59:52 compute-0 podman[19465]: 2025-10-09 10:59:52.381835624 +0000 UTC m=+0.236604786 container attach 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 10:59:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:59:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:59:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:52 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.cjdyiw for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:59:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  9 10:59:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 10:59:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 10:59:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  9 10:59:52 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  9 10:59:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [0] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:52 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v97: 195 pgs: 1 unknown, 1 creating+peering, 44 peering, 149 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  9 10:59:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  9 10:59:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 10:59:52 compute-0 podman[19601]: 2025-10-09 10:59:52.887064213 +0000 UTC m=+0.039735715 container create cedb5afb2b4945dfd2cc2c864603f4522de2454cbdb1b79b4e8a383bd6479225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-0-cjdyiw, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c3af8b18ee16558ff83a240a16f5df21ff66d23d03eaed3734e05971da18c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c3af8b18ee16558ff83a240a16f5df21ff66d23d03eaed3734e05971da18c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c3af8b18ee16558ff83a240a16f5df21ff66d23d03eaed3734e05971da18c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c3af8b18ee16558ff83a240a16f5df21ff66d23d03eaed3734e05971da18c8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.cjdyiw supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:52 compute-0 podman[19601]: 2025-10-09 10:59:52.947533253 +0000 UTC m=+0.100204765 container init cedb5afb2b4945dfd2cc2c864603f4522de2454cbdb1b79b4e8a383bd6479225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-0-cjdyiw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:59:52 compute-0 podman[19601]: 2025-10-09 10:59:52.954147291 +0000 UTC m=+0.106818793 container start cedb5afb2b4945dfd2cc2c864603f4522de2454cbdb1b79b4e8a383bd6479225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-0-cjdyiw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct  9 10:59:52 compute-0 bash[19601]: cedb5afb2b4945dfd2cc2c864603f4522de2454cbdb1b79b4e8a383bd6479225
Oct  9 10:59:52 compute-0 podman[19601]: 2025-10-09 10:59:52.869045967 +0000 UTC m=+0.021717499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:59:52 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.cjdyiw for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:59:53 compute-0 radosgw[19620]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:59:53 compute-0 radosgw[19620]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct  9 10:59:53 compute-0 radosgw[19620]: framework: beast
Oct  9 10:59:53 compute-0 radosgw[19620]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  9 10:59:53 compute-0 radosgw[19620]: init_numa not setting numa affinity
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 5931ad43-8e96-42ba-86b0-58d67e70fc5a (Updating rgw.rgw deployment (+3 -> 3))
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 5931ad43-8e96-42ba-86b0-58d67e70fc5a (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev c65cf4db-ac4d-4113-b648-b47b8da46a5e (Updating node-exporter deployment (+3 -> 3))
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 10:59:53 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  9 10:59:53 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 ceph-mon[4705]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct  9 10:59:53 compute-0 systemd[1]: Reloading.
Oct  9 10:59:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:59:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  1: '-n'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  2: 'mgr.compute-0.izrudc'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  3: '-f'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  4: '--setuser'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  5: 'ceph'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  6: '--setgroup'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  7: 'ceph'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr respawn  exe_path /proc/self/exe
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.izrudc(active, since 2m), standbys: compute-2.agiurv, compute-1.rtiqvm
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 10:59:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:53 compute-0 podman[19465]: 2025-10-09 10:59:53.781335079 +0000 UTC m=+1.636104241 container died 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:59:53 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: libpod-8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 8 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 14 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 17 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 15 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 12 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setuser ceph since I am not root
Oct  9 10:59:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3fdd3c552a21266e8f2abfa032e241ef56ca953837610993d4991b87287d8c9-merged.mount: Deactivated successfully.
Oct  9 10:59:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setgroup ceph since I am not root
Oct  9 10:59:53 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 10:59:53 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 9 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 10 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 6 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 13 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 10.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 15.
Oct  9 10:59:53 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 14.
Oct  9 10:59:53 compute-0 podman[19465]: 2025-10-09 10:59:53.889779815 +0000 UTC m=+1.744548977 container remove 8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef (image=quay.io/ceph/ceph:v19, name=serene_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 17.
Oct  9 10:59:53 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 6.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 8.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 9.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 16.
Oct  9 10:59:53 compute-0 systemd[1]: libpod-conmon-8d12a2a0e0180d9068b9ad01e4125f9b48111140af838198cb9d065b463e2aef.scope: Deactivated successfully.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 12.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 13.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 18 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Session 11 logged out. Waiting for processes to exit.
Oct  9 10:59:53 compute-0 systemd-logind[846]: Removed session 11.
Oct  9 10:59:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 10:59:53 compute-0 systemd[1]: Reloading.
Oct  9 10:59:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 10:59:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 10:59:54 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:54.028+0000 7f8bd78ea140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:59:54 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 10:59:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 10:59:54 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:54.116+0000 7f8bd78ea140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:59:54 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 10:59:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 10:59:54 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 10:59:54 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  9 10:59:54 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  9 10:59:54 compute-0 python3[20440]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 10:59:54 compute-0 ceph-mon[4705]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 10:59:54 compute-0 ceph-mon[4705]: Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:54 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 10:59:54 compute-0 bash[20490]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct  9 10:59:54 compute-0 podman[20485]: 2025-10-09 10:59:54.403827386 +0000 UTC m=+0.048792905 container create db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:59:54 compute-0 systemd[1]: Started libpod-conmon-db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f.scope.
Oct  9 10:59:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:59:54 compute-0 podman[20485]: 2025-10-09 10:59:54.378139557 +0000 UTC m=+0.023105106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 10:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a449363bf3e65fb0d3bd54fda71bcb44706d0079fa58cecab24c238d5a441d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a449363bf3e65fb0d3bd54fda71bcb44706d0079fa58cecab24c238d5a441d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69a449363bf3e65fb0d3bd54fda71bcb44706d0079fa58cecab24c238d5a441d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:54 compute-0 podman[20485]: 2025-10-09 10:59:54.492995925 +0000 UTC m=+0.137961474 container init db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:59:54 compute-0 podman[20485]: 2025-10-09 10:59:54.500619947 +0000 UTC m=+0.145585466 container start db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:54 compute-0 podman[20485]: 2025-10-09 10:59:54.503695239 +0000 UTC m=+0.148660768 container attach db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:59:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  9 10:59:54 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:54 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:54 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  9 10:59:54 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  9 10:59:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 10:59:54 compute-0 bash[20490]: Getting image source signatures
Oct  9 10:59:54 compute-0 bash[20490]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct  9 10:59:54 compute-0 bash[20490]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct  9 10:59:54 compute-0 bash[20490]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:55.050+0000 7f8bd78ea140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 10:59:55 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  9 10:59:55 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  9 10:59:55 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:55 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:55 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 10:59:55 compute-0 bash[20490]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct  9 10:59:55 compute-0 bash[20490]: Writing manifest to image destination
Oct  9 10:59:55 compute-0 podman[20490]: 2025-10-09 10:59:55.590789345 +0000 UTC m=+1.219415964 container create 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:59:55 compute-0 podman[20490]: 2025-10-09 10:59:55.57133576 +0000 UTC m=+1.199962399 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 10:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bee001972612ab5444561052d9682f385a6d13cdb68ae28a2ecae6049500723/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  9 10:59:55 compute-0 podman[20490]: 2025-10-09 10:59:55.643713293 +0000 UTC m=+1.272339912 container init 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:59:55 compute-0 podman[20490]: 2025-10-09 10:59:55.649537847 +0000 UTC m=+1.278164466 container start 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:59:55 compute-0 bash[20490]: 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.659Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.659Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 10:59:55 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.662Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.662Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.663Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.663Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.663Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.664Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.665Z caller=node_exporter.go:117 level=info collector=os
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=time
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.666Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.667Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[20616]: ts=2025-10-09T10:59:55.667Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 10:59:55 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct  9 10:59:55 compute-0 systemd[1]: session-18.scope: Consumed 23.958s CPU time.
Oct  9 10:59:55 compute-0 systemd-logind[846]: Removed session 18.
Oct  9 10:59:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:55.768+0000 7f8bd78ea140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 10:59:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:55.959+0000 7f8bd78ea140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 10:59:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 10:59:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:56.049+0000 7f8bd78ea140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 10:59:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  9 10:59:56 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  9 10:59:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 10:59:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:59:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 10:59:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 10:59:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 10:59:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:56.204+0000 7f8bd78ea140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 10:59:56 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  9 10:59:56 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  9 10:59:56 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 10:59:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 10:59:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:59:57 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Oct  9 10:59:57 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.258+0000 7f8bd78ea140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 10:59:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.542+0000 7f8bd78ea140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.627+0000 7f8bd78ea140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.698+0000 7f8bd78ea140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.777+0000 7f8bd78ea140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 10:59:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:57.853+0000 7f8bd78ea140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 10:59:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 10:59:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  9 10:59:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  9 10:59:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  9 10:59:58 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Oct  9 10:59:58 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Oct  9 10:59:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:58.223+0000 7f8bd78ea140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 10:59:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:58.329+0000 7f8bd78ea140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 10:59:58 compute-0 radosgw[19620]: v1 topic migration: starting v1 topic migration..
Oct  9 10:59:58 compute-0 radosgw[19620]: LDAP not started since no server URIs were provided in the configuration.
Oct  9 10:59:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-0-cjdyiw[19616]: 2025-10-09T10:59:58.330+0000 7efe0a35c980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  9 10:59:58 compute-0 radosgw[19620]: v1 topic migration: finished v1 topic migration
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: framework: beast
Oct  9 10:59:58 compute-0 radosgw[19620]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  9 10:59:58 compute-0 radosgw[19620]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  9 10:59:58 compute-0 radosgw[19620]: starting handler: beast
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 10:59:58 compute-0 radosgw[19620]: mgrc service_daemon_register rgw.14373 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.cjdyiw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=1063f874-5e69-4914-9198-c2cdfb8f2870,zone_name=default,zonegroup_id=59510648-2c54-408c-beb4-010e0f01e98d,zonegroup_name=default}
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 radosgw[19620]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 10:59:58 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-mon[4705]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 10:59:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:58.784+0000 7f8bd78ea140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 10:59:58 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 10:59:59 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Oct  9 10:59:59 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.379+0000 7f8bd78ea140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.451+0000 7f8bd78ea140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.540+0000 7f8bd78ea140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.687+0000 7f8bd78ea140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.757+0000 7f8bd78ea140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 10:59:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T10:59:59.916+0000 7f8bd78ea140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 10:59:59 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv restarted
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv started
Oct  9 11:00:00 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  9 11:00:00 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  9 11:00:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:00.166+0000 7f8bd78ea140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm restarted
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm started
Oct  9 11:00:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:00.455+0000 7f8bd78ea140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 11:00:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:00.535+0000 7f8bd78ea140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x55d73226b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr handle_mgr_map Activating!
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr handle_mgr_map I am now activating
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.izrudc(active, starting, since 0.0679689s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: balancer
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Manager daemon compute-0.izrudc is now available
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [balancer INFO root] Starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_11:00:00
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: cephadm
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: crash
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: dashboard
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: devicehealth
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: iostat
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: nfs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: orchestrator
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: pg_autoscaler
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: progress
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [progress INFO root] Loading...
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f8b564324c0>, <progress.module.GhostEvent object at 0x7f8b564324f0>, <progress.module.GhostEvent object at 0x7f8b56432520>, <progress.module.GhostEvent object at 0x7f8b56432550>, <progress.module.GhostEvent object at 0x7f8b56432580>, <progress.module.GhostEvent object at 0x7f8b564325b0>, <progress.module.GhostEvent object at 0x7f8b564325e0>, <progress.module.GhostEvent object at 0x7f8b56432610>, <progress.module.GhostEvent object at 0x7f8b56432640>, <progress.module.GhostEvent object at 0x7f8b56432670>, <progress.module.GhostEvent object at 0x7f8b564326a0>] historic events
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] recovery thread starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] starting setup
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: rbd_support
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: restful
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: status
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [restful WARNING root] server not running: no certificate configured
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: telemetry
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:00:00 compute-0 ceph-mon[4705]: overall HEALTH_OK
Oct  9 11:00:00 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:00 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] PerfHandler: starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: volumes
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TaskHandler: starting
Oct  9 11:00:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"} v 0)
Oct  9 11:00:00 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [rbd_support INFO root] setup complete
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 11:00:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 11:00:00 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 11:00:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 11:00:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct  9 11:00:01 compute-0 systemd-logind[846]: New session 19 of user ceph-admin.
Oct  9 11:00:01 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Oct  9 11:00:01 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  9 11:00:01 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  9 11:00:01 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.module] Engine started.
Oct  9 11:00:01 compute-0 podman[20912]: 2025-10-09 11:00:01.858470155 +0000 UTC m=+0.066738602 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:01 compute-0 podman[20912]: 2025-10-09 11:00:01.956235398 +0000 UTC m=+0.164503845 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 11:00:02 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  9 11:00:02 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:02 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.izrudc(active, since 1.76721s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct  9 11:00:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:02] ENGINE Bus STARTING
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:02] ENGINE Bus STARTING
Oct  9 11:00:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:02 compute-0 podman[21047]: 2025-10-09 11:00:02.538546469 +0000 UTC m=+0.109909341 container exec 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:00:02 compute-0 podman[21047]: 2025-10-09 11:00:02.585325846 +0000 UTC m=+0.156688718 container exec_died 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:02] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:02] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:02] ENGINE Client ('192.168.122.100', 60264) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:02] ENGINE Client ('192.168.122.100', 60264) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 11:00:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:02] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:02] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:02] ENGINE Bus STARTED
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:02] ENGINE Bus STARTED
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Check health
Oct  9 11:00:02 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:03 compute-0 ceph-mon[4705]: Manager daemon compute-0.izrudc is now available
Oct  9 11:00:03 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:00:03 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:00:03 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  9 11:00:03 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  9 11:00:04 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:04 compute-0 admiring_jang[20513]: Option GRAFANA_API_USERNAME updated
Oct  9 11:00:04 compute-0 systemd[1]: libpod-db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f.scope: Deactivated successfully.
Oct  9 11:00:04 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  9 11:00:04 compute-0 podman[21118]: 2025-10-09 11:00:04.104093538 +0000 UTC m=+0.022456525 container died db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:04 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  9 11:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a449363bf3e65fb0d3bd54fda71bcb44706d0079fa58cecab24c238d5a441d-merged.mount: Deactivated successfully.
Oct  9 11:00:04 compute-0 podman[21118]: 2025-10-09 11:00:04.139862047 +0000 UTC m=+0.058225004 container remove db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f (image=quay.io/ceph/ceph:v19, name=admiring_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 11:00:04 compute-0 systemd[1]: libpod-conmon-db7f3a6ac8ff9dff6e70be4ea452b5b1f1be69f936537e6c49efaa45acbf297f.scope: Deactivated successfully.
Oct  9 11:00:04 compute-0 python3[21158]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Oct  9 11:00:04 compute-0 podman[21159]: 2025-10-09 11:00:04.520565213 +0000 UTC m=+0.036506272 container create 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  9 11:00:04 compute-0 systemd[1]: Started libpod-conmon-6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c.scope.
Oct  9 11:00:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f309dfceb3cd5d3057fc6a018908abd227804eb6b064bbaccbb8ee5e6c765353/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f309dfceb3cd5d3057fc6a018908abd227804eb6b064bbaccbb8ee5e6c765353/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f309dfceb3cd5d3057fc6a018908abd227804eb6b064bbaccbb8ee5e6c765353/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:04 compute-0 podman[21159]: 2025-10-09 11:00:04.597833114 +0000 UTC m=+0.113774203 container init 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:04 compute-0 podman[21159]: 2025-10-09 11:00:04.504268139 +0000 UTC m=+0.020209218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:04 compute-0 podman[21159]: 2025-10-09 11:00:04.60739516 +0000 UTC m=+0.123336219 container start 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 11:00:04 compute-0 podman[21159]: 2025-10-09 11:00:04.610425959 +0000 UTC m=+0.126367038 container attach 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:04 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:04 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:02] ENGINE Bus STARTING
Oct  9 11:00:04 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:02] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:04 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:02] ENGINE Client ('192.168.122.100', 60264) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:04 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:02] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:04 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:02] ENGINE Bus STARTED
Oct  9 11:00:04 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:04 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.izrudc(active, since 4s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:04 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:04 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:04 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct  9 11:00:05 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.14 deep-scrub starts
Oct  9 11:00:05 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.14 deep-scrub ok
Oct  9 11:00:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:05 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:06 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct  9 11:00:06 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct  9 11:00:06 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:06 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:06 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:06 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:06 compute-0 focused_knuth[21174]: Option GRAFANA_API_PASSWORD updated
Oct  9 11:00:06 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.izrudc(active, since 6s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:06 compute-0 systemd[1]: libpod-6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c.scope: Deactivated successfully.
Oct  9 11:00:06 compute-0 podman[21159]: 2025-10-09 11:00:06.643666852 +0000 UTC m=+2.159607921 container died 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f309dfceb3cd5d3057fc6a018908abd227804eb6b064bbaccbb8ee5e6c765353-merged.mount: Deactivated successfully.
Oct  9 11:00:06 compute-0 podman[21159]: 2025-10-09 11:00:06.681573035 +0000 UTC m=+2.197514094 container remove 6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c (image=quay.io/ceph/ceph:v19, name=focused_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  9 11:00:06 compute-0 systemd[1]: libpod-conmon-6b6ad853e300f9ec66a941203f444c0d27c582dd89eb20f07a4f6aadf0ad2e5c.scope: Deactivated successfully.
Oct  9 11:00:07 compute-0 python3[21235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:07 compute-0 podman[21236]: 2025-10-09 11:00:07.053124823 +0000 UTC m=+0.036755717 container create 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:07 compute-0 systemd[1]: Started libpod-conmon-5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715.scope.
Oct  9 11:00:07 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct  9 11:00:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c975d6691b9ca9e5827c612662d295ea5a962113a974a2ae23021cb54b4fb9c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c975d6691b9ca9e5827c612662d295ea5a962113a974a2ae23021cb54b4fb9c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c975d6691b9ca9e5827c612662d295ea5a962113a974a2ae23021cb54b4fb9c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:07 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct  9 11:00:07 compute-0 podman[21236]: 2025-10-09 11:00:07.119916395 +0000 UTC m=+0.103547289 container init 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:07 compute-0 podman[21236]: 2025-10-09 11:00:07.127054341 +0000 UTC m=+0.110685235 container start 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 11:00:07 compute-0 podman[21236]: 2025-10-09 11:00:07.13070396 +0000 UTC m=+0.114334854 container attach 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:07 compute-0 podman[21236]: 2025-10-09 11:00:07.037110384 +0000 UTC m=+0.020741298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:07 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:07 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:07 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct  9 11:00:07 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:08 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct  9 11:00:08 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct  9 11:00:08 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:08 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:08 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:08 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:08 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
Oct  9 11:00:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:09 compute-0 nice_nightingale[21250]: Option ALERTMANAGER_API_HOST updated
Oct  9 11:00:09 compute-0 systemd[1]: libpod-5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715.scope: Deactivated successfully.
Oct  9 11:00:09 compute-0 podman[21236]: 2025-10-09 11:00:09.058446074 +0000 UTC m=+2.042076968 container died 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 11:00:09 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct  9 11:00:09 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct  9 11:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c975d6691b9ca9e5827c612662d295ea5a962113a974a2ae23021cb54b4fb9c6-merged.mount: Deactivated successfully.
Oct  9 11:00:09 compute-0 podman[21236]: 2025-10-09 11:00:09.215169192 +0000 UTC m=+2.198800076 container remove 5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715 (image=quay.io/ceph/ceph:v19, name=nice_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 11:00:09 compute-0 systemd[1]: libpod-conmon-5d3db50c290172819ce1fcb516ff27ac2f5ac6113da53a9492b08b27102c1715.scope: Deactivated successfully.
Oct  9 11:00:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 11:00:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:09 compute-0 python3[21464]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:09 compute-0 podman[21465]: 2025-10-09 11:00:09.575688671 +0000 UTC m=+0.047350328 container create fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:09 compute-0 systemd[1]: Started libpod-conmon-fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099.scope.
Oct  9 11:00:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5811529246ffaa247032deb8ed5b1c73289caea3778f9e18b53aa7b3753ad9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5811529246ffaa247032deb8ed5b1c73289caea3778f9e18b53aa7b3753ad9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5811529246ffaa247032deb8ed5b1c73289caea3778f9e18b53aa7b3753ad9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:09 compute-0 podman[21465]: 2025-10-09 11:00:09.553344199 +0000 UTC m=+0.025005876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:09 compute-0 podman[21465]: 2025-10-09 11:00:09.656845596 +0000 UTC m=+0.128507283 container init fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 11:00:09 compute-0 podman[21465]: 2025-10-09 11:00:09.662963955 +0000 UTC m=+0.134625612 container start fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 11:00:09 compute-0 podman[21465]: 2025-10-09 11:00:09.666690026 +0000 UTC m=+0.138351683 container attach fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 11:00:10 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:10 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:10 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:10 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:10 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:10 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 11:00:10 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:10 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct  9 11:00:10 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts
Oct  9 11:00:10 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok
Oct  9 11:00:10 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v7: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Oct  9 11:00:11 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Oct  9 11:00:11 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Oct  9 11:00:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:12 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:12 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:12 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  9 11:00:12 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  9 11:00:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:12 compute-0 infallible_antonelli[21481]: Option PROMETHEUS_API_HOST updated
Oct  9 11:00:12 compute-0 systemd[1]: libpod-fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099.scope: Deactivated successfully.
Oct  9 11:00:12 compute-0 conmon[21481]: conmon fdf983c33dadac050fac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099.scope/container/memory.events
Oct  9 11:00:12 compute-0 podman[21465]: 2025-10-09 11:00:12.680523301 +0000 UTC m=+3.152184958 container died fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 11:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5811529246ffaa247032deb8ed5b1c73289caea3778f9e18b53aa7b3753ad9-merged.mount: Deactivated successfully.
Oct  9 11:00:12 compute-0 podman[21465]: 2025-10-09 11:00:12.725780964 +0000 UTC m=+3.197442621 container remove fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099 (image=quay.io/ceph/ceph:v19, name=infallible_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 11:00:12 compute-0 systemd[1]: libpod-conmon-fdf983c33dadac050facf9c90b2ca7ddb710c887ee3fb72f209fb323db366099.scope: Deactivated successfully.
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 11:00:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:13 compute-0 python3[21542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:13 compute-0 podman[21593]: 2025-10-09 11:00:13.129041316 +0000 UTC m=+0.056073229 container create 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 11:00:13 compute-0 systemd[1]: Started libpod-conmon-8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d.scope.
Oct  9 11:00:13 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  9 11:00:13 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  9 11:00:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:13 compute-0 podman[21593]: 2025-10-09 11:00:13.103848948 +0000 UTC m=+0.030880881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4af60f6e0804c5eac7cbb41396ff5edc329c475049952afd8c259da2b48fdc4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4af60f6e0804c5eac7cbb41396ff5edc329c475049952afd8c259da2b48fdc4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4af60f6e0804c5eac7cbb41396ff5edc329c475049952afd8c259da2b48fdc4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:13 compute-0 podman[21593]: 2025-10-09 11:00:13.224709376 +0000 UTC m=+0.151741319 container init 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 11:00:13 compute-0 podman[21593]: 2025-10-09 11:00:13.232622084 +0000 UTC m=+0.159653997 container start 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:13 compute-0 podman[21593]: 2025-10-09 11:00:13.236161911 +0000 UTC m=+0.163193824 container attach 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:13 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14439 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:13 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:13 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  9 11:00:14 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v9: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:14 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:14 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:14 compute-0 mystifying_ride[21655]: Option GRAFANA_API_URL updated
Oct  9 11:00:14 compute-0 systemd[1]: libpod-8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d.scope: Deactivated successfully.
Oct  9 11:00:14 compute-0 podman[21593]: 2025-10-09 11:00:14.855641692 +0000 UTC m=+1.782673605 container died 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4af60f6e0804c5eac7cbb41396ff5edc329c475049952afd8c259da2b48fdc4-merged.mount: Deactivated successfully.
Oct  9 11:00:14 compute-0 podman[21593]: 2025-10-09 11:00:14.898467896 +0000 UTC m=+1.825499809 container remove 8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d (image=quay.io/ceph/ceph:v19, name=mystifying_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 11:00:14 compute-0 systemd[1]: libpod-conmon-8eea8eb240731dad1fc4b8be83331f2be70d59c4db5d0c3f046ccf2c5c40077d.scope: Deactivated successfully.
Oct  9 11:00:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:15 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:15 compute-0 python3[22486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:15 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  9 11:00:15 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  9 11:00:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:15 compute-0 podman[22511]: 2025-10-09 11:00:15.279211013 +0000 UTC m=+0.053383576 container create fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Oct  9 11:00:15 compute-0 systemd[1]: Started libpod-conmon-fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19.scope.
Oct  9 11:00:15 compute-0 podman[22511]: 2025-10-09 11:00:15.25373387 +0000 UTC m=+0.027906483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:15 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99cdbd6dd68e3ea6daeeeaeb875bd97c55c5ee3e9418a13244963f89750284f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99cdbd6dd68e3ea6daeeeaeb875bd97c55c5ee3e9418a13244963f89750284f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99cdbd6dd68e3ea6daeeeaeb875bd97c55c5ee3e9418a13244963f89750284f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:15 compute-0 podman[22511]: 2025-10-09 11:00:15.386059064 +0000 UTC m=+0.160231627 container init fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:15 compute-0 podman[22511]: 2025-10-09 11:00:15.393948811 +0000 UTC m=+0.168121364 container start fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 11:00:15 compute-0 podman[22511]: 2025-10-09 11:00:15.39754853 +0000 UTC m=+0.171721053 container attach fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:15 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  9 11:00:15 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 11:00:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:15 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Oct  9 11:00:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:16 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 11:00:16 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 7c0edd5c-849b-425c-b5b6-a957eada966c (Updating node-exporter deployment (+2 -> 3))
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v10: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  1: '-n'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  2: 'mgr.compute-0.izrudc'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  3: '-f'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  4: '--setuser'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  5: 'ceph'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  6: '--setgroup'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  7: 'ceph'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: mgr respawn  exe_path /proc/self/exe
Oct  9 11:00:16 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.izrudc(active, since 16s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:16 compute-0 systemd[1]: libpod-fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19.scope: Deactivated successfully.
Oct  9 11:00:16 compute-0 podman[22511]: 2025-10-09 11:00:16.895113056 +0000 UTC m=+1.669285609 container died fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-99cdbd6dd68e3ea6daeeeaeb875bd97c55c5ee3e9418a13244963f89750284f3-merged.mount: Deactivated successfully.
Oct  9 11:00:16 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct  9 11:00:16 compute-0 systemd[1]: session-19.scope: Consumed 4.702s CPU time.
Oct  9 11:00:16 compute-0 systemd-logind[846]: Session 19 logged out. Waiting for processes to exit.
Oct  9 11:00:16 compute-0 systemd-logind[846]: Removed session 19.
Oct  9 11:00:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setuser ceph since I am not root
Oct  9 11:00:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setgroup ceph since I am not root
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 11:00:16 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 11:00:17 compute-0 podman[22511]: 2025-10-09 11:00:17.022142044 +0000 UTC m=+1.796314577 container remove fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19 (image=quay.io/ceph/ceph:v19, name=great_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 11:00:17 compute-0 systemd[1]: libpod-conmon-fa22eb01220aa392ff3738470a54fd7b423734fc1b78765b9ceb4b90aa69cd19.scope: Deactivated successfully.
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:00:17 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:17.116+0000 7f231477e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 11:00:17 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  9 11:00:17 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:00:17 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:17.203+0000 7f231477e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 11:00:17 compute-0 python3[22608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:17 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:17 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:17 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:17 compute-0 ceph-mon[4705]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:17 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 11:00:17 compute-0 podman[22609]: 2025-10-09 11:00:17.391800021 +0000 UTC m=+0.072201640 container create 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:17 compute-0 podman[22609]: 2025-10-09 11:00:17.339654057 +0000 UTC m=+0.020055686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:17 compute-0 systemd[1]: Started libpod-conmon-3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92.scope.
Oct  9 11:00:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b320cfa7af563c284b63f867f52a9d271e0c245c6b0027a1872cc6187f151e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b320cfa7af563c284b63f867f52a9d271e0c245c6b0027a1872cc6187f151e7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b320cfa7af563c284b63f867f52a9d271e0c245c6b0027a1872cc6187f151e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:17 compute-0 podman[22609]: 2025-10-09 11:00:17.510490244 +0000 UTC m=+0.190891873 container init 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 11:00:17 compute-0 podman[22609]: 2025-10-09 11:00:17.517258773 +0000 UTC m=+0.197660392 container start 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:17 compute-0 podman[22609]: 2025-10-09 11:00:17.520634988 +0000 UTC m=+0.201036597 container attach 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct  9 11:00:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  9 11:00:17 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 11:00:17 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:18.032+0000 7f231477e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  9 11:00:18 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  9 11:00:18 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 11:00:18 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 11:00:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.izrudc(active, since 18s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:18 compute-0 systemd[1]: libpod-3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92.scope: Deactivated successfully.
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:18.707+0000 7f231477e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 11:00:18 compute-0 podman[22661]: 2025-10-09 11:00:18.710048903 +0000 UTC m=+0.023505323 container died 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 11:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b320cfa7af563c284b63f867f52a9d271e0c245c6b0027a1872cc6187f151e7-merged.mount: Deactivated successfully.
Oct  9 11:00:18 compute-0 podman[22661]: 2025-10-09 11:00:18.743711571 +0000 UTC m=+0.057167961 container remove 3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92 (image=quay.io/ceph/ceph:v19, name=heuristic_euler, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 11:00:18 compute-0 systemd[1]: libpod-conmon-3f33b9b96334d5168051bb645aac41a67491c4ee748475f921c165055ce8fe92.scope: Deactivated successfully.
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:18.889+0000 7f231477e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:18.964+0000 7f231477e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:00:18 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:00:19 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:19.114+0000 7f231477e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 11:00:19 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  9 11:00:19 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  9 11:00:19 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 11:00:19 compute-0 python3[22749]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 11:00:19 compute-0 python3[22820]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007619.2635617-33968-13312890496086/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 11:00:19 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 11:00:20 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  9 11:00:20 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.141+0000 7f231477e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 11:00:20 compute-0 python3[22870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:20 compute-0 podman[22871]: 2025-10-09 11:00:20.270016603 +0000 UTC m=+0.034876837 container create 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 11:00:20 compute-0 systemd[1]: Started libpod-conmon-37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14.scope.
Oct  9 11:00:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6588382b72aa47532ae16a0a7beaa5381f61c25dca0eb2d1d309d734380ee5c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6588382b72aa47532ae16a0a7beaa5381f61c25dca0eb2d1d309d734380ee5c1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6588382b72aa47532ae16a0a7beaa5381f61c25dca0eb2d1d309d734380ee5c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:20 compute-0 podman[22871]: 2025-10-09 11:00:20.338880809 +0000 UTC m=+0.103741063 container init 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:20 compute-0 podman[22871]: 2025-10-09 11:00:20.254423674 +0000 UTC m=+0.019283928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:20 compute-0 podman[22871]: 2025-10-09 11:00:20.35172237 +0000 UTC m=+0.116582604 container start 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 11:00:20 compute-0 podman[22871]: 2025-10-09 11:00:20.355082318 +0000 UTC m=+0.119942572 container attach 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.386+0000 7f231477e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 11:00:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  9 11:00:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  9 11:00:20 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.465+0000 7f231477e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.537+0000 7f231477e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.615+0000 7f231477e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 11:00:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:20.691+0000 7f231477e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:00:20 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 11:00:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:21.058+0000 7f231477e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 11:00:21 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct  9 11:00:21 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct  9 11:00:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:21.158+0000 7f231477e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 11:00:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  9 11:00:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:21.622+0000 7f231477e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:00:21 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 11:00:22 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  9 11:00:22 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.208+0000 7f231477e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.282+0000 7f231477e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 11:00:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  9 11:00:22 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.363+0000 7f231477e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 11:00:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.510+0000 7f231477e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.584+0000 7f231477e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.751+0000 7f231477e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 11:00:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:22.984+0000 7f231477e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:22 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv started
Oct  9 11:00:23 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  9 11:00:23 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:23.254+0000 7f231477e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:23.325+0000 7f231477e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mgr e19 prepare_beacon:  waiting for osdmon writeable to blocklist old instance.
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x55557d3d5860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  1: '-n'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  2: 'mgr.compute-0.izrudc'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  3: '-f'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  4: '--setuser'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  5: 'ceph'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  6: '--setgroup'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  7: 'ceph'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr respawn  exe_path /proc/self/exe
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e46 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setuser ceph since I am not root
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setgroup ceph since I am not root
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 11:00:23 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:23.540+0000 7f5abde1f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.izrudc(active, starting, since 0.220637s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm started
Oct  9 11:00:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:23.630+0000 7f5abde1f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:00:23 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 11:00:23 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:23 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 11:00:24 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  9 11:00:24 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  9 11:00:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 11:00:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:24.455+0000 7f5abde1f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:00:24 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:00:24 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 11:00:24 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.izrudc(active, starting, since 1.26552s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:25 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 11:00:25 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:25.124+0000 7f5abde1f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:25.294+0000 7f5abde1f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:25.367+0000 7f5abde1f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 11:00:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:25.504+0000 7f5abde1f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 11:00:25 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 11:00:26 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  9 11:00:26 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 11:00:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:26.524+0000 7f5abde1f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 11:00:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:26.750+0000 7f5abde1f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 11:00:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:26.837+0000 7f5abde1f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 11:00:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:26.910+0000 7f5abde1f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 11:00:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:26.987+0000 7f5abde1f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:00:26 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 11:00:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:27.061+0000 7f5abde1f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 11:00:27 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct  9 11:00:27 compute-0 systemd[6033]: Activating special unit Exit the Session...
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped target Main User Target.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped target Basic System.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped target Paths.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped target Sockets.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped target Timers.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 11:00:27 compute-0 systemd[6033]: Closed D-Bus User Message Bus Socket.
Oct  9 11:00:27 compute-0 systemd[6033]: Stopped Create User's Volatile Files and Directories.
Oct  9 11:00:27 compute-0 systemd[6033]: Removed slice User Application Slice.
Oct  9 11:00:27 compute-0 systemd[6033]: Reached target Shutdown.
Oct  9 11:00:27 compute-0 systemd[6033]: Finished Exit the Session.
Oct  9 11:00:27 compute-0 systemd[6033]: Reached target Exit the Session.
Oct  9 11:00:27 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct  9 11:00:27 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct  9 11:00:27 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  9 11:00:27 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  9 11:00:27 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  9 11:00:27 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  9 11:00:27 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct  9 11:00:27 compute-0 systemd[1]: user-42477.slice: Consumed 30.088s CPU time.
Oct  9 11:00:27 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  9 11:00:27 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  9 11:00:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:27.432+0000 7f5abde1f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 11:00:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:27.536+0000 7f5abde1f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 11:00:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:27.994+0000 7f5abde1f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:00:27 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 11:00:28 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  9 11:00:28 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  9 11:00:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:28.596+0000 7f5abde1f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 11:00:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:28.669+0000 7f5abde1f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 11:00:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:28.750+0000 7f5abde1f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 11:00:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:28.900+0000 7f5abde1f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 11:00:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:28.976+0000 7f5abde1f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:00:28 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 11:00:29 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  9 11:00:29 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  9 11:00:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:29.149+0000 7f5abde1f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv restarted
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv started
Oct  9 11:00:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:29.378+0000 7f5abde1f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 11:00:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:29.641+0000 7f5abde1f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 11:00:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:00:29.729+0000 7f5abde1f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x55a9a043f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr handle_mgr_map Activating!
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.izrudc(active, starting, since 0.0750165s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr handle_mgr_map I am now activating
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: balancer
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Manager daemon compute-0.izrudc is now available
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_11:00:29
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: cephadm
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: crash
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: dashboard
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: devicehealth
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: iostat
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: nfs
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: orchestrator
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: pg_autoscaler
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: progress
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [progress INFO root] Loading...
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f5a3b1893d0>, <progress.module.GhostEvent object at 0x7f5a3b1896d0>, <progress.module.GhostEvent object at 0x7f5a3b189700>, <progress.module.GhostEvent object at 0x7f5a3b189730>, <progress.module.GhostEvent object at 0x7f5a3b189760>, <progress.module.GhostEvent object at 0x7f5a3b189790>, <progress.module.GhostEvent object at 0x7f5a3b1897c0>, <progress.module.GhostEvent object at 0x7f5a3b1897f0>, <progress.module.GhostEvent object at 0x7f5a3b189820>, <progress.module.GhostEvent object at 0x7f5a3b189850>, <progress.module.GhostEvent object at 0x7f5a3b189880>] historic events
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] recovery thread starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] starting setup
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: rbd_support
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: restful
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: status
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: telemetry
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [restful WARNING root] server not running: no certificate configured
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] PerfHandler: starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 11:00:29 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 11:00:29 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 11:00:29 compute-0 ceph-mon[4705]: Manager daemon compute-0.izrudc is now available
Oct  9 11:00:29 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TaskHandler: starting
Oct  9 11:00:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"} v 0)
Oct  9 11:00:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: volumes
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 11:00:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] setup complete
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 11:00:30 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 11:00:30 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 11:00:30 compute-0 systemd-logind[846]: New session 20 of user ceph-admin.
Oct  9 11:00:30 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 11:00:30 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 11:00:30 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 11:00:30 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm restarted
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm started
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.module] Engine started.
Oct  9 11:00:30 compute-0 systemd[23076]: Queued start job for default target Main User Target.
Oct  9 11:00:30 compute-0 systemd[23076]: Created slice User Application Slice.
Oct  9 11:00:30 compute-0 systemd[23076]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 11:00:30 compute-0 systemd[23076]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 11:00:30 compute-0 systemd[23076]: Reached target Paths.
Oct  9 11:00:30 compute-0 systemd[23076]: Reached target Timers.
Oct  9 11:00:30 compute-0 systemd[23076]: Starting D-Bus User Message Bus Socket...
Oct  9 11:00:30 compute-0 systemd[23076]: Starting Create User's Volatile Files and Directories...
Oct  9 11:00:30 compute-0 systemd[23076]: Listening on D-Bus User Message Bus Socket.
Oct  9 11:00:30 compute-0 systemd[23076]: Reached target Sockets.
Oct  9 11:00:30 compute-0 systemd[23076]: Finished Create User's Volatile Files and Directories.
Oct  9 11:00:30 compute-0 systemd[23076]: Reached target Basic System.
Oct  9 11:00:30 compute-0 systemd[23076]: Reached target Main User Target.
Oct  9 11:00:30 compute-0 systemd[23076]: Startup finished in 117ms.
Oct  9 11:00:30 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  9 11:00:30 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.izrudc(active, since 1.10082s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14475 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e2 new map
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2025-10-09T11:00:30:843595+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T11:00:30.843558+0000
    modified	2025-10-09T11:00:30.843558+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  9 11:00:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0[4701]: 2025-10-09T11:00:30.842+0000 7f6d8de0c640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:30 compute-0 ceph-mgr[4997]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  9 11:00:30 compute-0 systemd[1]: libpod-37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14.scope: Deactivated successfully.
Oct  9 11:00:30 compute-0 podman[23182]: 2025-10-09 11:00:30.927522066 +0000 UTC m=+0.023255395 container died 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 11:00:30 compute-0 ceph-mon[4705]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  9 11:00:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6588382b72aa47532ae16a0a7beaa5381f61c25dca0eb2d1d309d734380ee5c1-merged.mount: Deactivated successfully.
Oct  9 11:00:30 compute-0 podman[23182]: 2025-10-09 11:00:30.970608246 +0000 UTC m=+0.066341555 container remove 37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14 (image=quay.io/ceph/ceph:v19, name=zen_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:30 compute-0 systemd[1]: libpod-conmon-37b404d3fc1c6da6166a98dc8b9ceb1eb99abef9fec6aa2a657b88f633c31c14.scope: Deactivated successfully.
Oct  9 11:00:31 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Oct  9 11:00:31 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Oct  9 11:00:31 compute-0 podman[23227]: 2025-10-09 11:00:31.127960236 +0000 UTC m=+0.061454579 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 11:00:31 compute-0 podman[23227]: 2025-10-09 11:00:31.213293278 +0000 UTC m=+0.146787601 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 11:00:31 compute-0 python3[23270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.327747374 +0000 UTC m=+0.040363384 container create edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 11:00:31 compute-0 systemd[1]: Started libpod-conmon-edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd.scope.
Oct  9 11:00:31 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d32792cefa09ca719004bc935a29d58442aa8c732cbb937d3ff4ba5443fb005/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d32792cefa09ca719004bc935a29d58442aa8c732cbb937d3ff4ba5443fb005/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d32792cefa09ca719004bc935a29d58442aa8c732cbb937d3ff4ba5443fb005/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.307222377 +0000 UTC m=+0.019838417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:31] ENGINE Bus STARTING
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:31] ENGINE Bus STARTING
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.479055709 +0000 UTC m=+0.191671769 container init edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.486171958 +0000 UTC m=+0.198787988 container start edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.506053685 +0000 UTC m=+0.218669695 container attach edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:31] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:31] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:31] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:31] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:31] ENGINE Bus STARTED
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:31] ENGINE Bus STARTED
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:00:31] ENGINE Client ('192.168.122.100', 57166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:00:31] ENGINE Client ('192.168.122.100', 57166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:31 compute-0 podman[23448]: 2025-10-09 11:00:31.835286229 +0000 UTC m=+0.057638187 container exec 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 great_keller[23334]: Scheduled mds.cephfs update...
Oct  9 11:00:31 compute-0 podman[23448]: 2025-10-09 11:00:31.870260999 +0000 UTC m=+0.092612957 container exec_died 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:00:31 compute-0 systemd[1]: libpod-edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd.scope: Deactivated successfully.
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.889379941 +0000 UTC m=+0.601995951 container died edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 11:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d32792cefa09ca719004bc935a29d58442aa8c732cbb937d3ff4ba5443fb005-merged.mount: Deactivated successfully.
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:31 compute-0 podman[23294]: 2025-10-09 11:00:31.932067688 +0000 UTC m=+0.644683698 container remove edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd (image=quay.io/ceph/ceph:v19, name=great_keller, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 11:00:31 compute-0 systemd[1]: libpod-conmon-edec26802df01695b4e01382a1385c5dfcfde240e10eeab66d08507cf7c15bcd.scope: Deactivated successfully.
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Check health
Oct  9 11:00:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:31 compute-0 ceph-mon[4705]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:31 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:31] ENGINE Bus STARTING
Oct  9 11:00:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  9 11:00:32 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  9 11:00:32 compute-0 python3[23580]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:32 compute-0 podman[23583]: 2025-10-09 11:00:32.282253463 +0000 UTC m=+0.040656283 container create dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 11:00:32 compute-0 systemd[1]: Started libpod-conmon-dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be.scope.
Oct  9 11:00:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fe2d5c32fdf5972d2e7c33d38a481f4e48652fa29d5b78588a108572a72410/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fe2d5c32fdf5972d2e7c33d38a481f4e48652fa29d5b78588a108572a72410/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82fe2d5c32fdf5972d2e7c33d38a481f4e48652fa29d5b78588a108572a72410/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:32 compute-0 podman[23583]: 2025-10-09 11:00:32.355611303 +0000 UTC m=+0.114014123 container init dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:32 compute-0 podman[23583]: 2025-10-09 11:00:32.264984711 +0000 UTC m=+0.023387551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:32 compute-0 podman[23583]: 2025-10-09 11:00:32.361491041 +0000 UTC m=+0.119893861 container start dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:32 compute-0 podman[23583]: 2025-10-09 11:00:32.365060226 +0000 UTC m=+0.123463076 container attach dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.izrudc(active, since 3s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:31] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:00:32 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:31] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:00:32 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:31] ENGINE Bus STARTED
Oct  9 11:00:32 compute-0 ceph-mon[4705]: [09/Oct/2025:11:00:31] ENGINE Client ('192.168.122.100', 57166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:00:32 compute-0 ceph-mon[4705]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:33 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  9 11:00:33 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  9 11:00:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct  9 11:00:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 11:00:33 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:00:34 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:00:34 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:00:34 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 11:00:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  9 11:00:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  9 11:00:34 compute-0 ceph-mon[4705]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 11:00:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 11:00:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  9 11:00:34 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  9 11:00:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.izrudc(active, since 5s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:00:35 compute-0 ceph-mon[4705]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 85883017-c07e-4617-ae79-374cec13bcc3 (Updating node-exporter deployment (+1 -> 3))
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 11:00:35 compute-0 systemd[1]: libpod-dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be.scope: Deactivated successfully.
Oct  9 11:00:35 compute-0 podman[23583]: 2025-10-09 11:00:35.671436497 +0000 UTC m=+3.429839317 container died dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 11:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-82fe2d5c32fdf5972d2e7c33d38a481f4e48652fa29d5b78588a108572a72410-merged.mount: Deactivated successfully.
Oct  9 11:00:35 compute-0 podman[23583]: 2025-10-09 11:00:35.703777712 +0000 UTC m=+3.462180532 container remove dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be (image=quay.io/ceph/ceph:v19, name=eloquent_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:35 compute-0 systemd[1]: libpod-conmon-dd63865fd7ba2c9838216207f694f88d3115e7660e220cb994ad2ae9bec6c2be.scope: Deactivated successfully.
Oct  9 11:00:35 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:00:36 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  9 11:00:36 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  9 11:00:36 compute-0 python3[24714]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 11:00:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  9 11:00:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  9 11:00:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:36 compute-0 ceph-mon[4705]: Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 11:00:36 compute-0 python3[24787]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007636.0506513-33999-95378374339326/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=ac6dcf0753e563ab9abb6eea6b28c18fd3517839 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 11:00:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 11:00:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.izrudc(active, since 6s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:00:37 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct  9 11:00:37 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct  9 11:00:37 compute-0 python3[24837]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.135337611 +0000 UTC m=+0.037687199 container create 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:37 compute-0 systemd[1]: Started libpod-conmon-4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd.scope.
Oct  9 11:00:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c83660ea327d5b4d33520aa3571ff7ba3a245267042b3b92059c5c1f009e30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c83660ea327d5b4d33520aa3571ff7ba3a245267042b3b92059c5c1f009e30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.203631378 +0000 UTC m=+0.105980986 container init 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.209590718 +0000 UTC m=+0.111940306 container start 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.213306397 +0000 UTC m=+0.115656005 container attach 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.120827666 +0000 UTC m=+0.023177264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:37 compute-0 ceph-mon[4705]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 11:00:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct  9 11:00:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 11:00:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  9 11:00:37 compute-0 systemd[1]: libpod-4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd.scope: Deactivated successfully.
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.647327518 +0000 UTC m=+0.549677106 container died 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5c83660ea327d5b4d33520aa3571ff7ba3a245267042b3b92059c5c1f009e30-merged.mount: Deactivated successfully.
Oct  9 11:00:37 compute-0 podman[24838]: 2025-10-09 11:00:37.691557584 +0000 UTC m=+0.593907172 container remove 4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd (image=quay.io/ceph/ceph:v19, name=gifted_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:37 compute-0 systemd[1]: libpod-conmon-4048eb8cc256fcfee92e8731f684387224eec93068fea6986556c57a3a71a6dd.scope: Deactivated successfully.
Oct  9 11:00:37 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:38 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  9 11:00:38 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 85883017-c07e-4617-ae79-374cec13bcc3 (Updating node-exporter deployment (+1 -> 3))
Oct  9 11:00:38 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 85883017-c07e-4617-ae79-374cec13bcc3 (Updating node-exporter deployment (+1 -> 3)) in 3 seconds
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 11:00:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:38 compute-0 python3[24915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:38 compute-0 podman[24938]: 2025-10-09 11:00:38.507422324 +0000 UTC m=+0.038753422 container create 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:38 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 11:00:38 compute-0 systemd[1]: Started libpod-conmon-05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a.scope.
Oct  9 11:00:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054cfcbe252000d04003e53a6c2b70f0509d7d6c688b70095abb5fe2e60b7ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054cfcbe252000d04003e53a6c2b70f0509d7d6c688b70095abb5fe2e60b7ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:38 compute-0 podman[24938]: 2025-10-09 11:00:38.580387581 +0000 UTC m=+0.111718699 container init 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:38 compute-0 podman[24938]: 2025-10-09 11:00:38.491099151 +0000 UTC m=+0.022430269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:38 compute-0 podman[24938]: 2025-10-09 11:00:38.586260549 +0000 UTC m=+0.117591647 container start 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 11:00:38 compute-0 podman[24938]: 2025-10-09 11:00:38.593070466 +0000 UTC m=+0.124401564 container attach 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.895359708 +0000 UTC m=+0.033212775 container create 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 11:00:38 compute-0 systemd[1]: Started libpod-conmon-523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a.scope.
Oct  9 11:00:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.880244554 +0000 UTC m=+0.018097651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.979015327 +0000 UTC m=+0.116868424 container init 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.984290335 +0000 UTC m=+0.122143402 container start 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:38 compute-0 frosty_kepler[25061]: 167 167
Oct  9 11:00:38 compute-0 systemd[1]: libpod-523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a.scope: Deactivated successfully.
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.994637648 +0000 UTC m=+0.132490745 container attach 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 11:00:38 compute-0 podman[25044]: 2025-10-09 11:00:38.995053651 +0000 UTC m=+0.132906728 container died 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-670303f1f109968e17fefda361bf87951afbb844a51ffd6011d18f0851552d76-merged.mount: Deactivated successfully.
Oct  9 11:00:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 11:00:39 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454360937' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 11:00:39 compute-0 objective_dirac[24982]: 
Oct  9 11:00:39 compute-0 objective_dirac[24982]: {"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":77,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1760007582,"num_in_osds":3,"osd_in_since":1760007566,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":89047040,"bytes_avail":64322879488,"bytes_total":64411926528,"read_bytes_sec":30028,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-10-09T11:00:30:843595+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-10-09T11:00:04.618436+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.izrudc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.rtiqvm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.agiurv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14373":{"start_epoch":3,"start_stamp":"2025-10-09T11:00:02.308498+0000","gid":14373,"addr":"192.168.122.100:0/1576514846","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.cjdyiw","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 
2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864100","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}},"24125":{"start_epoch":3,"start_stamp":"2025-10-09T11:00:02.315628+0000","gid":24125,"addr":"192.168.122.101:0/1032475736","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.vbxein","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864108","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":4,"start_stamp":"2025-10-09T11:00:02.811156+0000","gid":24148,"addr":"192.168.122.102:0/330463100","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 
9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.klwwrz","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864100","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"85883017-c07e-4617-ae79-374cec13bcc3":{"message":"Updating node-exporter deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  9 11:00:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct  9 11:00:39 compute-0 podman[25044]: 2025-10-09 11:00:39.052737308 +0000 UTC m=+0.190590375 container remove 523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Oct  9 11:00:39 compute-0 systemd[1]: libpod-05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 podman[24938]: 2025-10-09 11:00:39.055472716 +0000 UTC m=+0.586803814 container died 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct  9 11:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c054cfcbe252000d04003e53a6c2b70f0509d7d6c688b70095abb5fe2e60b7ed-merged.mount: Deactivated successfully.
Oct  9 11:00:39 compute-0 podman[24938]: 2025-10-09 11:00:39.107907825 +0000 UTC m=+0.639238923 container remove 05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a (image=quay.io/ceph/ceph:v19, name=objective_dirac, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:39 compute-0 systemd[1]: libpod-conmon-523d4874e19e2a61dedeeaa02bf9a43c51d4dbbffe6c957913fdb9c49c9a1d2a.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 systemd[1]: libpod-conmon-05a6f2775de0265259ce97be439dcc2a9af9c43ef1549a8ac3763feb3bc7d25a.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.194761306 +0000 UTC m=+0.040293650 container create d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.175704136 +0000 UTC m=+0.021236500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:39 compute-0 systemd[1]: Started libpod-conmon-d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d.scope.
Oct  9 11:00:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.333560922 +0000 UTC m=+0.179093296 container init d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.338874382 +0000 UTC m=+0.184406726 container start d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.344479362 +0000 UTC m=+0.190011716 container attach d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 11:00:39 compute-0 python3[25138]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.462599665 +0000 UTC m=+0.035695315 container create fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:39 compute-0 systemd[1]: Started libpod-conmon-fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74.scope.
Oct  9 11:00:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5b5db730305fec5e0d0c126f72b8c54584f88425ad76615b0d579e6a7b5029/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5b5db730305fec5e0d0c126f72b8c54584f88425ad76615b0d579e6a7b5029/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.527370088 +0000 UTC m=+0.100465768 container init fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.536972636 +0000 UTC m=+0.110068286 container start fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.540039455 +0000 UTC m=+0.113135125 container attach fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.447990227 +0000 UTC m=+0.021085897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:39 compute-0 clever_tu[25139]: --> passed data devices: 0 physical, 1 LVM
Oct  9 11:00:39 compute-0 clever_tu[25139]: --> All data devices are unavailable
Oct  9 11:00:39 compute-0 systemd[1]: libpod-d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 conmon[25139]: conmon d938f915337b35f791b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d.scope/container/memory.events
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.67539387 +0000 UTC m=+0.520926214 container died d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 11:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-974c51fe58ff9eeea94d1c1a25ce65e12f6e3be3cd42261833fc7f7892d9c0eb-merged.mount: Deactivated successfully.
Oct  9 11:00:39 compute-0 podman[25097]: 2025-10-09 11:00:39.72285835 +0000 UTC m=+0.568390744 container remove d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 11:00:39 compute-0 systemd[1]: libpod-conmon-d938f915337b35f791b8e3c7934d52ea49e0f763ce5b17691a5de981faae101d.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct  9 11:00:39 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 12 completed events
Oct  9 11:00:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:00:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 11:00:39 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363350816' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 11:00:39 compute-0 bold_euler[25159]: 
Oct  9 11:00:39 compute-0 bold_euler[25159]: {"epoch":3,"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","modified":"2025-10-09T10:59:16.540045Z","created":"2025-10-09T10:57:14.796633Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct  9 11:00:39 compute-0 bold_euler[25159]: dumped monmap epoch 3
Oct  9 11:00:39 compute-0 systemd[1]: libpod-fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74.scope: Deactivated successfully.
Oct  9 11:00:39 compute-0 podman[25144]: 2025-10-09 11:00:39.997733063 +0000 UTC m=+0.570828743 container died fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5b5db730305fec5e0d0c126f72b8c54584f88425ad76615b0d579e6a7b5029-merged.mount: Deactivated successfully.
Oct  9 11:00:40 compute-0 podman[25144]: 2025-10-09 11:00:40.034322234 +0000 UTC m=+0.607417884 container remove fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74 (image=quay.io/ceph/ceph:v19, name=bold_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 11:00:40 compute-0 systemd[1]: libpod-conmon-fd70f8c621f394109b31bd3d3c4bd91351d9d7af7a7c2a99a341ab4286242d74.scope: Deactivated successfully.
Oct  9 11:00:40 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  9 11:00:40 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.228544395 +0000 UTC m=+0.034654371 container create c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 11:00:40 compute-0 systemd[1]: Started libpod-conmon-c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5.scope.
Oct  9 11:00:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.299217529 +0000 UTC m=+0.105327525 container init c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.305463249 +0000 UTC m=+0.111573225 container start c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.308234828 +0000 UTC m=+0.114344834 container attach c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 11:00:40 compute-0 goofy_wescoff[25325]: 167 167
Oct  9 11:00:40 compute-0 systemd[1]: libpod-c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5.scope: Deactivated successfully.
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.212618815 +0000 UTC m=+0.018728821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:40 compute-0 conmon[25325]: conmon c5663bf144f2c734c54e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5.scope/container/memory.events
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.310984796 +0000 UTC m=+0.117094792 container died c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ced5ed1b0ca58ff38c1110fe7db76baf0115f037dcdebf27a95791d6b7acdd7-merged.mount: Deactivated successfully.
Oct  9 11:00:40 compute-0 podman[25308]: 2025-10-09 11:00:40.341782692 +0000 UTC m=+0.147892668 container remove c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 11:00:40 compute-0 systemd[1]: libpod-conmon-c5663bf144f2c734c54e64c1852b52e5dd0e33b319a3ad780bbbdc8eb50ca5c5.scope: Deactivated successfully.
Oct  9 11:00:40 compute-0 podman[25350]: 2025-10-09 11:00:40.484633017 +0000 UTC m=+0.039954301 container create d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:40 compute-0 systemd[1]: Started libpod-conmon-d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9.scope.
Oct  9 11:00:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d6faadcd0536c8a630b1366e0aede1d44a6489c351aa49f35d56cc8d6820e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d6faadcd0536c8a630b1366e0aede1d44a6489c351aa49f35d56cc8d6820e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d6faadcd0536c8a630b1366e0aede1d44a6489c351aa49f35d56cc8d6820e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d6faadcd0536c8a630b1366e0aede1d44a6489c351aa49f35d56cc8d6820e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 podman[25350]: 2025-10-09 11:00:40.557992786 +0000 UTC m=+0.113314080 container init d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:40 compute-0 podman[25350]: 2025-10-09 11:00:40.467294412 +0000 UTC m=+0.022615746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:40 compute-0 podman[25350]: 2025-10-09 11:00:40.567399857 +0000 UTC m=+0.122721141 container start d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:40 compute-0 podman[25350]: 2025-10-09 11:00:40.570561659 +0000 UTC m=+0.125882973 container attach d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Oct  9 11:00:40 compute-0 python3[25389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:40 compute-0 podman[25397]: 2025-10-09 11:00:40.693159175 +0000 UTC m=+0.042793302 container create ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:40 compute-0 systemd[1]: Started libpod-conmon-ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277.scope.
Oct  9 11:00:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c7182c7c9b7722b7b5cb9f7ec9222f18526429695c5bb6e36a8246d7b7baf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c7182c7c9b7722b7b5cb9f7ec9222f18526429695c5bb6e36a8246d7b7baf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:40 compute-0 podman[25397]: 2025-10-09 11:00:40.766324558 +0000 UTC m=+0.115958715 container init ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:40 compute-0 podman[25397]: 2025-10-09 11:00:40.674986894 +0000 UTC m=+0.024621041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:40 compute-0 podman[25397]: 2025-10-09 11:00:40.772747114 +0000 UTC m=+0.122381241 container start ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:40 compute-0 podman[25397]: 2025-10-09 11:00:40.775836003 +0000 UTC m=+0.125470160 container attach ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:40 compute-0 crazy_tu[25392]: {
Oct  9 11:00:40 compute-0 crazy_tu[25392]:    "0": [
Oct  9 11:00:40 compute-0 crazy_tu[25392]:        {
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "devices": [
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "/dev/loop3"
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            ],
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "lv_name": "ceph_lv0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "lv_size": "21470642176",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0ea02d81-16d9-4b32-9888-cc7ebc83243e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "lv_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "name": "ceph_lv0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "tags": {
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.block_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.cephx_lockbox_secret": "",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.cluster_name": "ceph",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.crush_device_class": "",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.encrypted": "0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.osd_fsid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.osd_id": "0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.type": "block",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.vdo": "0",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:                "ceph.with_tpm": "0"
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            },
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "type": "block",
Oct  9 11:00:40 compute-0 crazy_tu[25392]:            "vg_name": "ceph_vg0"
Oct  9 11:00:40 compute-0 crazy_tu[25392]:        }
Oct  9 11:00:40 compute-0 crazy_tu[25392]:    ]
Oct  9 11:00:40 compute-0 crazy_tu[25392]: }
Oct  9 11:00:40 compute-0 systemd[1]: libpod-d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9.scope: Deactivated successfully.
Oct  9 11:00:40 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:40 compute-0 podman[25423]: 2025-10-09 11:00:40.910016101 +0000 UTC m=+0.021813801 container died d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 11:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9d6faadcd0536c8a630b1366e0aede1d44a6489c351aa49f35d56cc8d6820e2-merged.mount: Deactivated successfully.
Oct  9 11:00:40 compute-0 podman[25423]: 2025-10-09 11:00:40.954373921 +0000 UTC m=+0.066171601 container remove d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct  9 11:00:40 compute-0 systemd[1]: libpod-conmon-d81d8f532cf9d82cd1f7a9ee5ff6a3f3b3342d242e36708909d9e459bbb6aba9.scope: Deactivated successfully.
Oct  9 11:00:41 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  9 11:00:41 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  9 11:00:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct  9 11:00:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3409242971' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 11:00:41 compute-0 ecstatic_heisenberg[25414]: [client.openstack]
Oct  9 11:00:41 compute-0 ecstatic_heisenberg[25414]: #011key = AQDplOdoAAAAABAAuU/oCCe/0azXP2JCSUHvGQ==
Oct  9 11:00:41 compute-0 ecstatic_heisenberg[25414]: #011caps mgr = "allow *"
Oct  9 11:00:41 compute-0 ecstatic_heisenberg[25414]: #011caps mon = "profile rbd"
Oct  9 11:00:41 compute-0 ecstatic_heisenberg[25414]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  9 11:00:41 compute-0 systemd[1]: libpod-ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277.scope: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25397]: 2025-10-09 11:00:41.214166611 +0000 UTC m=+0.563800748 container died ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 11:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3c7182c7c9b7722b7b5cb9f7ec9222f18526429695c5bb6e36a8246d7b7baf-merged.mount: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25397]: 2025-10-09 11:00:41.25193125 +0000 UTC m=+0.601565377 container remove ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277 (image=quay.io/ceph/ceph:v19, name=ecstatic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Oct  9 11:00:41 compute-0 systemd[1]: libpod-conmon-ec89fbaf86dca1f204e3b9dba1bbba1e66fb089c4c1ed581d7b698505ba87277.scope: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.455238672 +0000 UTC m=+0.035191498 container create 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 11:00:41 compute-0 systemd[1]: Started libpod-conmon-5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937.scope.
Oct  9 11:00:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.511531175 +0000 UTC m=+0.091484031 container init 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.516572636 +0000 UTC m=+0.096525462 container start 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 11:00:41 compute-0 interesting_rhodes[25575]: 167 167
Oct  9 11:00:41 compute-0 systemd[1]: libpod-5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937.scope: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.520340917 +0000 UTC m=+0.100293763 container attach 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.520650956 +0000 UTC m=+0.100603782 container died 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.438646181 +0000 UTC m=+0.018599027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81a2e58e5069d94065a745a50508136c9b5b35c0b8f8e68f26f8b9719781c9a-merged.mount: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25559]: 2025-10-09 11:00:41.54950591 +0000 UTC m=+0.129458736 container remove 5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:41 compute-0 systemd[1]: libpod-conmon-5177b8f4d031e0eea09c3ee55e970c6b212acbded404791e5d42304e86ce5937.scope: Deactivated successfully.
Oct  9 11:00:41 compute-0 podman[25599]: 2025-10-09 11:00:41.72808964 +0000 UTC m=+0.070847099 container create 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:41 compute-0 systemd[1]: Started libpod-conmon-3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe.scope.
Oct  9 11:00:41 compute-0 podman[25599]: 2025-10-09 11:00:41.681593621 +0000 UTC m=+0.024351100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26fbc0981e35e5332fe6c1db9ce7ecbbf3f0d8e52a6cdf5c92a73da32969574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26fbc0981e35e5332fe6c1db9ce7ecbbf3f0d8e52a6cdf5c92a73da32969574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26fbc0981e35e5332fe6c1db9ce7ecbbf3f0d8e52a6cdf5c92a73da32969574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f26fbc0981e35e5332fe6c1db9ce7ecbbf3f0d8e52a6cdf5c92a73da32969574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:41 compute-0 podman[25599]: 2025-10-09 11:00:41.80116135 +0000 UTC m=+0.143918849 container init 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 11:00:41 compute-0 podman[25599]: 2025-10-09 11:00:41.808298099 +0000 UTC m=+0.151055548 container start 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:41 compute-0 podman[25599]: 2025-10-09 11:00:41.811082039 +0000 UTC m=+0.153839538 container attach 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:41 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct  9 11:00:41 compute-0 ceph-mon[4705]: from='client.? 192.168.122.100:0/3409242971' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 11:00:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  9 11:00:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  9 11:00:42 compute-0 lvm[25769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 11:00:42 compute-0 lvm[25769]: VG ceph_vg0 finished
Oct  9 11:00:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:42 compute-0 boring_elbakyan[25616]: {}
Oct  9 11:00:42 compute-0 systemd[1]: libpod-3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe.scope: Deactivated successfully.
Oct  9 11:00:42 compute-0 systemd[1]: libpod-3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe.scope: Consumed 1.054s CPU time.
Oct  9 11:00:42 compute-0 podman[25599]: 2025-10-09 11:00:42.517840412 +0000 UTC m=+0.860597871 container died 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f26fbc0981e35e5332fe6c1db9ce7ecbbf3f0d8e52a6cdf5c92a73da32969574-merged.mount: Deactivated successfully.
Oct  9 11:00:42 compute-0 podman[25599]: 2025-10-09 11:00:42.571429039 +0000 UTC m=+0.914186498 container remove 3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:42 compute-0 systemd[1]: libpod-conmon-3bee49c59f2d4851d93564deb81c326190464117b219c9d412c9e5e5b3e595fe.scope: Deactivated successfully.
Oct  9 11:00:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:42 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev de154028-0f6b-4d70-a680-09788ae5568d (Updating mds.cephfs deployment (+3 -> 3))
Oct  9 11:00:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 11:00:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:42 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:42 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.brbiqj on compute-2
Oct  9 11:00:42 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.brbiqj on compute-2
Oct  9 11:00:42 compute-0 ansible-async_wrapper.py[25854]: Invoked with j703157178236 30 /home/zuul/.ansible/tmp/ansible-tmp-1760007642.2518425-34071-219599543365573/AnsiballZ_command.py _
Oct  9 11:00:42 compute-0 ansible-async_wrapper.py[25857]: Starting module and watcher
Oct  9 11:00:42 compute-0 ansible-async_wrapper.py[25857]: Start watching 25858 (30)
Oct  9 11:00:42 compute-0 ansible-async_wrapper.py[25858]: Start module (25858)
Oct  9 11:00:42 compute-0 ansible-async_wrapper.py[25854]: Return async_wrapper task started.
Oct  9 11:00:42 compute-0 python3[25859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:42 compute-0 podman[25860]: 2025-10-09 11:00:42.910740186 +0000 UTC m=+0.038319638 container create 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 11:00:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:42 compute-0 systemd[1]: Started libpod-conmon-876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e.scope.
Oct  9 11:00:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccd52f9f327517f675fc2078d2f178aaf8b577cbdded29f8539a49b37cb1f1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccd52f9f327517f675fc2078d2f178aaf8b577cbdded29f8539a49b37cb1f1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:42 compute-0 podman[25860]: 2025-10-09 11:00:42.982923027 +0000 UTC m=+0.110502499 container init 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 11:00:42 compute-0 podman[25860]: 2025-10-09 11:00:42.989044384 +0000 UTC m=+0.116623836 container start 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:42 compute-0 podman[25860]: 2025-10-09 11:00:42.894834637 +0000 UTC m=+0.022414109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:42 compute-0 podman[25860]: 2025-10-09 11:00:42.992308988 +0000 UTC m=+0.119888450 container attach 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 11:00:43 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  9 11:00:43 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  9 11:00:43 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 11:00:43 compute-0 heuristic_satoshi[25876]: 
Oct  9 11:00:43 compute-0 heuristic_satoshi[25876]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 11:00:43 compute-0 systemd[1]: libpod-876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e.scope: Deactivated successfully.
Oct  9 11:00:43 compute-0 podman[25860]: 2025-10-09 11:00:43.359531159 +0000 UTC m=+0.487110611 container died 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 11:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ccd52f9f327517f675fc2078d2f178aaf8b577cbdded29f8539a49b37cb1f1d-merged.mount: Deactivated successfully.
Oct  9 11:00:43 compute-0 podman[25860]: 2025-10-09 11:00:43.394116356 +0000 UTC m=+0.521695808 container remove 876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e (image=quay.io/ceph/ceph:v19, name=heuristic_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:43 compute-0 systemd[1]: libpod-conmon-876b2cf4d04dcbc1dd73056a2565096f4148239a0e4f91ac585813b0ec1df26e.scope: Deactivated successfully.
Oct  9 11:00:43 compute-0 ansible-async_wrapper.py[25858]: Module complete (25858)
Oct  9 11:00:43 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct  9 11:00:43 compute-0 ceph-mon[4705]: Deploying daemon mds.cephfs.compute-2.brbiqj on compute-2
Oct  9 11:00:44 compute-0 python3[25960]: ansible-ansible.legacy.async_status Invoked with jid=j703157178236.25854 mode=status _async_dir=/root/.ansible_async
Oct  9 11:00:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  9 11:00:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:44 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.aesial on compute-0
Oct  9 11:00:44 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.aesial on compute-0
Oct  9 11:00:44 compute-0 python3[26009]: ansible-ansible.legacy.async_status Invoked with jid=j703157178236.25854 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.661881468 +0000 UTC m=+0.035885470 container create 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 11:00:44 compute-0 systemd[1]: Started libpod-conmon-7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9.scope.
Oct  9 11:00:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.7271795 +0000 UTC m=+0.101183522 container init 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.734634388 +0000 UTC m=+0.108638370 container start 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:44 compute-0 affectionate_driscoll[26140]: 167 167
Oct  9 11:00:44 compute-0 systemd[1]: libpod-7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9.scope: Deactivated successfully.
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.739145153 +0000 UTC m=+0.113149135 container attach 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.740424184 +0000 UTC m=+0.114428166 container died 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.646760755 +0000 UTC m=+0.020764767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1236cdb98b25f6750ca514aca01338783c91f06d71e92fb3c660f5851d88105-merged.mount: Deactivated successfully.
Oct  9 11:00:44 compute-0 podman[26099]: 2025-10-09 11:00:44.775383774 +0000 UTC m=+0.149387756 container remove 7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 11:00:44 compute-0 systemd[1]: libpod-conmon-7908a8d032775b379b432f0974e4d77ff0f828666a4b0336825ec860ed08ccf9.scope: Deactivated successfully.
Oct  9 11:00:44 compute-0 systemd[1]: Reloading.
Oct  9 11:00:44 compute-0 python3[26144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:44 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:00:44 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:00:44 compute-0 podman[26161]: 2025-10-09 11:00:44.911284376 +0000 UTC m=+0.042163392 container create e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:44 compute-0 ceph-mon[4705]: Deploying daemon mds.cephfs.compute-0.aesial on compute-0
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e3 new map
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e3 print_map
    e3
    btime 2025-10-09T11:00:44:961012+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T11:00:30.843558+0000
    modified	2025-10-09T11:00:30.843558+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
    
    
    Standby daemons:
    
    [mds.cephfs.compute-2.brbiqj{-1:24211} state up:standby seq 1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:boot
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] as mds.0
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.brbiqj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"} v 0)
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"}]: dispatch
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e3 all = 0
Oct  9 11:00:44 compute-0 podman[26161]: 2025-10-09 11:00:44.89829512 +0000 UTC m=+0.029174156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e4 new map
Oct  9 11:00:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e4 print_map
    e4
    btime 2025-10-09T11:00:44:984626+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	4
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T11:00:30.843558+0000
    modified	2025-10-09T11:00:44.984620+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=24211}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
    [mds.cephfs.compute-2.brbiqj{0:24211} state up:creating seq 1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
    
    
Oct  9 11:00:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:creating}
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.brbiqj is now active in filesystem cephfs as rank 0
Oct  9 11:00:45 compute-0 systemd[1]: Started libpod-conmon-e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72.scope.
Oct  9 11:00:45 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  9 11:00:45 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  9 11:00:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183505738e5f4e724317567ef80ac1b842f0e3ea86ac3821816c769127edd4e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183505738e5f4e724317567ef80ac1b842f0e3ea86ac3821816c769127edd4e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 systemd[1]: Reloading.
Oct  9 11:00:45 compute-0 podman[26161]: 2025-10-09 11:00:45.112976455 +0000 UTC m=+0.243855491 container init e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 11:00:45 compute-0 podman[26161]: 2025-10-09 11:00:45.123654817 +0000 UTC m=+0.254533833 container start e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 11:00:45 compute-0 podman[26161]: 2025-10-09 11:00:45.127005025 +0000 UTC m=+0.257884031 container attach e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 11:00:45 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:00:45 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:00:45 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.aesial for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:00:45 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 11:00:45 compute-0 kind_williams[26212]: 
Oct  9 11:00:45 compute-0 kind_williams[26212]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 11:00:45 compute-0 systemd[1]: libpod-e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72.scope: Deactivated successfully.
Oct  9 11:00:45 compute-0 podman[26161]: 2025-10-09 11:00:45.496889501 +0000 UTC m=+0.627768517 container died e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-183505738e5f4e724317567ef80ac1b842f0e3ea86ac3821816c769127edd4e7-merged.mount: Deactivated successfully.
Oct  9 11:00:45 compute-0 podman[26161]: 2025-10-09 11:00:45.540834788 +0000 UTC m=+0.671713804 container remove e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72 (image=quay.io/ceph/ceph:v19, name=kind_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 11:00:45 compute-0 systemd[1]: libpod-conmon-e247bf1dfa616cfec28366dbac71800e45d3d66a47b339bba29479d02d64ae72.scope: Deactivated successfully.
Oct  9 11:00:45 compute-0 podman[26329]: 2025-10-09 11:00:45.570354084 +0000 UTC m=+0.044937690 container create c8c42f3c2f744ec2303a269e50e9b51e44db374af4c5f5bd6cd2376b69f40805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-0-aesial, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64f138d5fe783ba33701c1e0bcfb5cec3c7cd158ff1a422b162d96075ee98e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64f138d5fe783ba33701c1e0bcfb5cec3c7cd158ff1a422b162d96075ee98e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64f138d5fe783ba33701c1e0bcfb5cec3c7cd158ff1a422b162d96075ee98e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64f138d5fe783ba33701c1e0bcfb5cec3c7cd158ff1a422b162d96075ee98e0/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.aesial supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:45 compute-0 podman[26329]: 2025-10-09 11:00:45.626198612 +0000 UTC m=+0.100782238 container init c8c42f3c2f744ec2303a269e50e9b51e44db374af4c5f5bd6cd2376b69f40805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-0-aesial, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:45 compute-0 podman[26329]: 2025-10-09 11:00:45.631476581 +0000 UTC m=+0.106060177 container start c8c42f3c2f744ec2303a269e50e9b51e44db374af4c5f5bd6cd2376b69f40805 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-0-aesial, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 11:00:45 compute-0 bash[26329]: c8c42f3c2f744ec2303a269e50e9b51e44db374af4c5f5bd6cd2376b69f40805
Oct  9 11:00:45 compute-0 podman[26329]: 2025-10-09 11:00:45.552359637 +0000 UTC m=+0.026943273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:45 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.aesial for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:00:45 compute-0 ceph-mds[26351]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 11:00:45 compute-0 ceph-mds[26351]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct  9 11:00:45 compute-0 ceph-mds[26351]: main not setting numa affinity
Oct  9 11:00:45 compute-0 ceph-mds[26351]: pidfile_write: ignore empty --pid-file
Oct  9 11:00:45 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-0-aesial[26347]: starting mds.cephfs.compute-0.aesial at 
Oct  9 11:00:45 compute-0 ceph-mds[26351]: mds.cephfs.compute-0.aesial Updating MDS map to version 4 from mon.0
Oct  9 11:00:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:45 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.yzkqil on compute-1
Oct  9 11:00:45 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.yzkqil on compute-1
Oct  9 11:00:45 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct  9 11:00:45 compute-0 ceph-mon[4705]: daemon mds.cephfs.compute-2.brbiqj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 11:00:45 compute-0 ceph-mon[4705]: Cluster is now healthy
Oct  9 11:00:45 compute-0 ceph-mon[4705]: daemon mds.cephfs.compute-2.brbiqj is now active in filesystem cephfs as rank 0
Oct  9 11:00:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 11:00:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e5 new map
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e5 print_map
    e5
    btime 2025-10-09T11:00:45:996712+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	5
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T11:00:30.843558+0000
    modified	2025-10-09T11:00:45.996709+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=24211}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 24211 members: 24211
    [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
    
    
    Standby daemons:
    
    [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:46 compute-0 ceph-mds[26351]: mds.cephfs.compute-0.aesial Updating MDS map to version 5 from mon.0
Oct  9 11:00:46 compute-0 ceph-mds[26351]: mds.cephfs.compute-0.aesial Monitors have assigned me to become a standby
Oct  9 11:00:46 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:active
Oct  9 11:00:46 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] up:boot
Oct  9 11:00:46 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 1 up:standby
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"} v 0)
Oct  9 11:00:46 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"}]: dispatch
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e5 all = 0
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e6 new map
Oct  9 11:00:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e6 print_map
    e6
    btime 2025-10-09T11:00:46:009915+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	5
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T11:00:30.843558+0000
    modified	2025-10-09T11:00:45.996709+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=24211}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	1
    qdb_cluster	leader: 24211 members: 24211
    [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
    
    
    Standby daemons:
    
    [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:46 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 1 up:standby
Oct  9 11:00:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  9 11:00:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  9 11:00:46 compute-0 python3[26396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:46 compute-0 podman[26397]: 2025-10-09 11:00:46.516446164 +0000 UTC m=+0.081374997 container create 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 11:00:46 compute-0 systemd[1]: Started libpod-conmon-06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888.scope.
Oct  9 11:00:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:46 compute-0 podman[26397]: 2025-10-09 11:00:46.49788414 +0000 UTC m=+0.062812993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f7bf92a36f1c6de54f075b495d5846109ef4a9affa26c17ffac420e1c4d403/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f7bf92a36f1c6de54f075b495d5846109ef4a9affa26c17ffac420e1c4d403/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:46 compute-0 podman[26397]: 2025-10-09 11:00:46.610253308 +0000 UTC m=+0.175182161 container init 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:46 compute-0 podman[26397]: 2025-10-09 11:00:46.618726599 +0000 UTC m=+0.183655432 container start 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:46 compute-0 podman[26397]: 2025-10-09 11:00:46.622397947 +0000 UTC m=+0.187326830 container attach 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 11:00:46 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 11:00:46 compute-0 jovial_turing[26412]: 
Oct  9 11:00:46 compute-0 jovial_turing[26412]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  9 11:00:46 compute-0 systemd[1]: libpod-06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888.scope: Deactivated successfully.
Oct  9 11:00:47 compute-0 podman[26437]: 2025-10-09 11:00:47.00872759 +0000 UTC m=+0.022198152 container died 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: Deploying daemon mds.cephfs.compute-1.yzkqil on compute-1
Oct  9 11:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f7bf92a36f1c6de54f075b495d5846109ef4a9affa26c17ffac420e1c4d403-merged.mount: Deactivated successfully.
Oct  9 11:00:47 compute-0 podman[26437]: 2025-10-09 11:00:47.042822052 +0000 UTC m=+0.056292594 container remove 06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888 (image=quay.io/ceph/ceph:v19, name=jovial_turing, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  9 11:00:47 compute-0 systemd[1]: libpod-conmon-06dc194f034a80ab8c537338a9bf25afb1841b10739d65d516ab9be80859f888.scope: Deactivated successfully.
Oct  9 11:00:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  9 11:00:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev de154028-0f6b-4d70-a680-09788ae5568d (Updating mds.cephfs deployment (+3 -> 3))
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event de154028-0f6b-4d70-a680-09788ae5568d (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 3fc4d835-3043-4db4-8f30-6e36e78e0af4 (Updating nfs.cephfs deployment (+3 -> 3))
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.cjtqwz's ganesha conf is defaulting to empty
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.cjtqwz's ganesha conf is defaulting to empty
Oct  9 11:00:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:47 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.cjtqwz on compute-1
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.cjtqwz on compute-1
Oct  9 11:00:47 compute-0 ansible-async_wrapper.py[25857]: Done in kid B.
Oct  9 11:00:47 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Oct  9 11:00:47 compute-0 python3[26511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.014512402 +0000 UTC m=+0.036288674 container create 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e7 new map
Oct  9 11:00:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-10-09T11:00:48:014896+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T11:00:30.843558+0000#012modified#0112025-10-09T11:00:45.996709+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24211}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24211 members: 24211#012[mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:48 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] up:boot
Oct  9 11:00:48 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct  9 11:00:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"} v 0)
Oct  9 11:00:48 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"}]: dispatch
Oct  9 11:00:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e7 all = 0
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:48 compute-0 systemd[1]: Started libpod-conmon-3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842.scope.
Oct  9 11:00:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0002bbb74ee019a13658111e58528d74574590c0a96e4e29ad95e734218581/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0002bbb74ee019a13658111e58528d74574590c0a96e4e29ad95e734218581/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.080692621 +0000 UTC m=+0.102468893 container init 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.086818227 +0000 UTC m=+0.108594499 container start 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.090316059 +0000 UTC m=+0.112092361 container attach 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:47.999361796 +0000 UTC m=+0.021138088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:48 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 11:00:48 compute-0 distracted_rhodes[26527]: 
Oct  9 11:00:48 compute-0 distracted_rhodes[26527]: [{"container_id": "29c3c53cfe8b", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.11%", "created": "2025-10-09T10:57:53.800814Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T11:00:31.924961Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-10-09T10:57:53.697647Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@crash.compute-0", "version": "19.2.3"}, {"container_id": "74dfaf8dd93c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.41%", "created": "2025-10-09T10:58:33.230409Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T11:00:31.686081Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-10-09T10:58:31.677596Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@crash.compute-1", "version": "19.2.3"}, {"container_id": "297b505cc29c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.23%", "created": "2025-10-09T10:59:24.963942Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-09T11:00:31.451478Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2025-10-09T10:59:24.882674Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.aesial", "daemon_name": "mds.cephfs.compute-0.aesial", "daemon_type": "mds", "events": ["2025-10-09T11:00:45.716350Z daemon:mds.cephfs.compute-0.aesial [INFO] \"Deployed mds.cephfs.compute-0.aesial on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-1.yzkqil", "daemon_name": "mds.cephfs.compute-1.yzkqil", "daemon_type": "mds", "events": ["2025-10-09T11:00:47.460820Z daemon:mds.cephfs.compute-1.yzkqil [INFO] \"Deployed mds.cephfs.compute-1.yzkqil on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.brbiqj", "daemon_name": "mds.cephfs.compute-2.brbiqj", "daemon_type": "mds", "events": ["2025-10-09T11:00:44.174493Z daemon:mds.cephfs.compute-2.brbiqj [INFO] \"Deployed mds.cephfs.compute-2.brbiqj on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "00875a7cafe3", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.73%", "created": "2025-10-09T10:57:20.328842Z", "daemon_id": "compute-0.izrudc", "daemon_name": "mgr.compute-0.izrudc", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T11:00:31.924823Z", "memory_usage": 541589504, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-09T10:57:20.190412Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mgr.compute-0.izrudc", "version": "19.2.3"}, {"container_id": "ac4a3ed45060", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "39.83%", "created": "2025-10-09T10:59:23.432706Z", "daemon_id": "compute-1.rtiqvm", "daemon_name": "mgr.compute-1.rtiqvm", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T11:00:31.686356Z", "memory_usage": 504155340, "ports": [8765], "service_name": "mgr", "started": "2025-10-09T10:59:23.311383Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mgr.compute-1.rtiqvm", "version": "19.2.3"}, {"container_id": "ddd2c1d76807", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "34.34%", "created": "2025-10-09T10:59:17.641694Z", "daemon_id": "compute-2.agiurv", "daemon_name": "mgr.compute-2.agiurv", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-09T11:00:31.451409Z", "memory_usage": 505518489, "ports": [8765], "service_name": "mgr", "started": "2025-10-09T10:59:17.499281Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mgr.compute-2.agiurv", "version": "19.2.3"}, {"container_id": "704febf2c4e8", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.54%", "created": "2025-10-09T10:57:16.638723Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T11:00:31.924671Z", "memory_request": 2147483648, "memory_usage": 60901294, "ports": [], "service_name": "mon", "started": "2025-10-09T10:57:18.538458Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mon.compute-0", "version": "19.2.3"}, {"container_id": "3e45636ce5dd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.00%", "created": "2025-10-09T10:59:14.407468Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T11:00:31.686281Z", "memory_request": 2147483648, "memory_usage": 47060090, "ports": [], "service_name": "mon", "started": "2025-10-09T10:59:14.292835Z", "status": 1, "status_des
Oct  9 11:00:48 compute-0 distracted_rhodes[26527]: : "2025-10-09T10:59:49.212765Z", "daemon_id": "rgw.compute-2.klwwrz", "daemon_name": "rgw.rgw.compute-2.klwwrz", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-10-09T11:00:31.451618Z", "memory_usage": 105277030, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-09T10:59:49.122155Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e990987d-9393-5e96-99ae-9e3a3319f191@rgw.rgw.compute-2.klwwrz", "version": "19.2.3"}]
Oct  9 11:00:48 compute-0 systemd[1]: libpod-3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842.scope: Deactivated successfully.
Oct  9 11:00:48 compute-0 conmon[26527]: conmon 3b3303f1980dc99abbd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842.scope/container/memory.events
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.465690001 +0000 UTC m=+0.487466273 container died 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 11:00:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f0002bbb74ee019a13658111e58528d74574590c0a96e4e29ad95e734218581-merged.mount: Deactivated successfully.
Oct  9 11:00:48 compute-0 podman[26512]: 2025-10-09 11:00:48.498510632 +0000 UTC m=+0.520286904 container remove 3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842 (image=quay.io/ceph/ceph:v19, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:48 compute-0 systemd[1]: libpod-conmon-3b3303f1980dc99abbd9991af7f976f56fbe856c9010eb104819b7e3404a3842.scope: Deactivated successfully.
Oct  9 11:00:48 compute-0 rsyslogd[1315]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "29c3c53cfe8b", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Bind address in nfs.cephfs.0.0.compute-1.cjtqwz's ganesha conf is defaulting to empty
Oct  9 11:00:49 compute-0 ceph-mon[4705]: Deploying daemon nfs.cephfs.0.0.compute-1.cjtqwz on compute-1
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:49 compute-0 python3[26590]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:49 compute-0 podman[26591]: 2025-10-09 11:00:49.523182369 +0000 UTC m=+0.038307268 container create 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 11:00:49 compute-0 systemd[1]: Started libpod-conmon-6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4.scope.
Oct  9 11:00:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dc45ec7300deb9205616e098b795293882852d16c66f946f9dac1ebbde591/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dc45ec7300deb9205616e098b795293882852d16c66f946f9dac1ebbde591/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:49 compute-0 podman[26591]: 2025-10-09 11:00:49.577215259 +0000 UTC m=+0.092340158 container init 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 11:00:49 compute-0 podman[26591]: 2025-10-09 11:00:49.582011723 +0000 UTC m=+0.097136622 container start 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Oct  9 11:00:49 compute-0 podman[26591]: 2025-10-09 11:00:49.585333449 +0000 UTC m=+0.100458378 container attach 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 11:00:49 compute-0 podman[26591]: 2025-10-09 11:00:49.507133575 +0000 UTC m=+0.022258494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e8 new map
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-10-09T11:00:49:727133+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T11:00:30.843558+0000#012modified#0112025-10-09T11:00:49.018608+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24211}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24211 members: 24211#012[mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat 
{c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:49 compute-0 ceph-mds[26351]: mds.cephfs.compute-0.aesial Updating MDS map to version 8 from mon.0
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:active
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] up:standby
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Oct  9 11:00:49 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 13 completed events
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 11:00:49 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248766397' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 11:00:49 compute-0 frosty_wiles[26622]: 
Oct  9 11:00:49 compute-0 frosty_wiles[26622]: {"fsid":"e990987d-9393-5e96-99ae-9e3a3319f191","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":88,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":52,"num_osds":3,"num_up_osds":3,"osd_up_since":1760007582,"num_in_osds":3,"osd_in_since":1760007566,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":216,"data_bytes":467025,"bytes_used":107565056,"bytes_avail":64304361472,"bytes_total":64411926528,"write_bytes_sec":1263,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":8,"btime":"2025-10-09T11:00:49:727133+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.brbiqj","status":"up:active","gid":24211}],"up:standby":2},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-10-09T11:00:04.618436+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.izrudc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.rtiqvm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.agiurv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14373":{"start_epoch":3,"start_stamp":"2025-10-09T11:00:02.308498+0000","gid":14373,"addr":"192.168.122.100:0/1576514846","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.cjdyiw","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864100","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}},"24125":{"start_epoch":3,"start_stamp":"2025-10-09T11:00:02.315628+0000","gid":24125,"addr":"192.168.122.101:0/1032475736","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid 
(stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.vbxein","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864108","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":4,"start_stamp":"2025-10-09T11:00:02.811156+0000","gid":24148,"addr":"192.168.122.102:0/330463100","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.klwwrz","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 
2025","kernel_version":"5.14.0-620.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864100","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"1063f874-5e69-4914-9198-c2cdfb8f2870","zone_name":"default","zonegroup_id":"59510648-2c54-408c-beb4-010e0f01e98d","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"3fc4d835-3043-4db4-8f30-6e36e78e0af4":{"message":"Updating nfs.cephfs deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  9 11:00:50 compute-0 systemd[1]: libpod-6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4.scope: Deactivated successfully.
Oct  9 11:00:50 compute-0 podman[26591]: 2025-10-09 11:00:50.007552401 +0000 UTC m=+0.522677310 container died 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3dc45ec7300deb9205616e098b795293882852d16c66f946f9dac1ebbde591-merged.mount: Deactivated successfully.
Oct  9 11:00:50 compute-0 podman[26591]: 2025-10-09 11:00:50.043557964 +0000 UTC m=+0.558682863 container remove 6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4 (image=quay.io/ceph/ceph:v19, name=frosty_wiles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:50 compute-0 systemd[1]: libpod-conmon-6099221f720188a5ee622152c2fb260559144e6c58683a2f160e2570e576fff4.scope: Deactivated successfully.
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:51 compute-0 python3[26685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.077197758 +0000 UTC m=+0.038300747 container create 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 11:00:51 compute-0 systemd[1]: Started libpod-conmon-170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442.scope.
Oct  9 11:00:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3057554ebee84f0a2b0af8a88d7336d1b4bec0161f399f37bb10ef618495e497/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3057554ebee84f0a2b0af8a88d7336d1b4bec0161f399f37bb10ef618495e497/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.142990075 +0000 UTC m=+0.104093094 container init 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.148712808 +0000 UTC m=+0.109815797 container start 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.153214743 +0000 UTC m=+0.114317732 container attach 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.059693607 +0000 UTC m=+0.020796616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:51 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg
Oct  9 11:00:51 compute-0 ceph-mon[4705]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 11:00:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 11:00:51 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340127821' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 11:00:51 compute-0 adoring_brown[26702]: 
Oct  9 11:00:51 compute-0 systemd[1]: libpod-170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442.scope: Deactivated successfully.
Oct  9 11:00:51 compute-0 adoring_brown[26702]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.izrudc/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.rtiqvm/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.agiurv/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.cjdyiw","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.vbxein","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.klwwrz","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.534675469 +0000 UTC m=+0.495778468 container died 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  9 11:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3057554ebee84f0a2b0af8a88d7336d1b4bec0161f399f37bb10ef618495e497-merged.mount: Deactivated successfully.
Oct  9 11:00:51 compute-0 podman[26686]: 2025-10-09 11:00:51.571494408 +0000 UTC m=+0.532597397 container remove 170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442 (image=quay.io/ceph/ceph:v19, name=adoring_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 11:00:51 compute-0 systemd[1]: libpod-conmon-170a50f10339117db511f50771daff5c6656c981c4c53405c702c0aa1e8ad442.scope: Deactivated successfully.
Oct  9 11:00:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 new map
Oct  9 11:00:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-10-09T11:00:51:777283+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T11:00:30.843558+0000#012modified#0112025-10-09T11:00:49.018608+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24211}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24211 members: 24211#012[mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 11:00:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] up:standby
Oct  9 11:00:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct  9 11:00:51 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct  9 11:00:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 11:00:52 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:52 compute-0 python3[26764]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:52 compute-0 podman[26765]: 2025-10-09 11:00:52.630116022 +0000 UTC m=+0.043498923 container create c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 11:00:52 compute-0 systemd[1]: Started libpod-conmon-c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec.scope.
Oct  9 11:00:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7327ff70c5b965fb4940a553bdf76790f2d3d45f972834451353a4d429884361/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7327ff70c5b965fb4940a553bdf76790f2d3d45f972834451353a4d429884361/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:52 compute-0 podman[26765]: 2025-10-09 11:00:52.700526808 +0000 UTC m=+0.113909729 container init c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 11:00:52 compute-0 podman[26765]: 2025-10-09 11:00:52.705823787 +0000 UTC m=+0.119206688 container start c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 11:00:52 compute-0 podman[26765]: 2025-10-09 11:00:52.615338499 +0000 UTC m=+0.028721420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:52 compute-0 podman[26765]: 2025-10-09 11:00:52.709421013 +0000 UTC m=+0.122803934 container attach c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:00:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct  9 11:00:53 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940324292' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  9 11:00:53 compute-0 recursing_black[26781]: mimic
Oct  9 11:00:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:53 compute-0 systemd[1]: libpod-c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec.scope: Deactivated successfully.
Oct  9 11:00:53 compute-0 podman[26765]: 2025-10-09 11:00:53.062355696 +0000 UTC m=+0.475738597 container died c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7327ff70c5b965fb4940a553bdf76790f2d3d45f972834451353a4d429884361-merged.mount: Deactivated successfully.
Oct  9 11:00:53 compute-0 podman[26765]: 2025-10-09 11:00:53.094069361 +0000 UTC m=+0.507452252 container remove c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec (image=quay.io/ceph/ceph:v19, name=recursing_black, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 11:00:53 compute-0 systemd[1]: libpod-conmon-c8ff075feb80f09ce20fc01b8f50ebc14686d5983a12d765c1a36e07a0b596ec.scope: Deactivated successfully.
Oct  9 11:00:53 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:53 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:53 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg-rgw
Oct  9 11:00:53 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg-rgw
Oct  9 11:00:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 11:00:53 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:53 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct  9 11:00:54 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:54 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:54 compute-0 ceph-mon[4705]: Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:54 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg-rgw
Oct  9 11:00:54 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:54 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:54 compute-0 ceph-mgr[4997]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.mtmthg's ganesha conf is defaulting to empty
Oct  9 11:00:54 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.mtmthg's ganesha conf is defaulting to empty
Oct  9 11:00:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:54 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:54 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.mtmthg on compute-2
Oct  9 11:00:54 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.mtmthg on compute-2
Oct  9 11:00:54 compute-0 python3[26865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:00:54 compute-0 podman[26866]: 2025-10-09 11:00:54.473944085 +0000 UTC m=+0.035147708 container create 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 11:00:54 compute-0 systemd[1]: Started libpod-conmon-7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871.scope.
Oct  9 11:00:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd32a61ef3fc7ce6607188b5a9fad957be6a633492b620aba34fae9b9e7e1483/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd32a61ef3fc7ce6607188b5a9fad957be6a633492b620aba34fae9b9e7e1483/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:54 compute-0 podman[26866]: 2025-10-09 11:00:54.55186804 +0000 UTC m=+0.113071683 container init 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 11:00:54 compute-0 podman[26866]: 2025-10-09 11:00:54.457532508 +0000 UTC m=+0.018736131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:00:54 compute-0 podman[26866]: 2025-10-09 11:00:54.557008045 +0000 UTC m=+0.118211668 container start 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 11:00:54 compute-0 podman[26866]: 2025-10-09 11:00:54.56058891 +0000 UTC m=+0.121792533 container attach 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893144808' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  9 11:00:55 compute-0 brave_cartwright[26881]: 
Oct  9 11:00:55 compute-0 systemd[1]: libpod-7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871.scope: Deactivated successfully.
Oct  9 11:00:55 compute-0 conmon[26881]: conmon 7bc477463ff38b89a8e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871.scope/container/memory.events
Oct  9 11:00:55 compute-0 brave_cartwright[26881]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Oct  9 11:00:55 compute-0 podman[26866]: 2025-10-09 11:00:55.032895396 +0000 UTC m=+0.594099019 container died 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 11:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd32a61ef3fc7ce6607188b5a9fad957be6a633492b620aba34fae9b9e7e1483-merged.mount: Deactivated successfully.
Oct  9 11:00:55 compute-0 podman[26866]: 2025-10-09 11:00:55.072045269 +0000 UTC m=+0.633248893 container remove 7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871 (image=quay.io/ceph/ceph:v19, name=brave_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 11:00:55 compute-0 systemd[1]: libpod-conmon-7bc477463ff38b89a8e8e07ca8c2488b3b7fa5a7ed4b304e6bf862af74ea6871.scope: Deactivated successfully.
Oct  9 11:00:55 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:55 compute-0 ceph-mon[4705]: Bind address in nfs.cephfs.1.0.compute-2.mtmthg's ganesha conf is defaulting to empty
Oct  9 11:00:55 compute-0 ceph-mon[4705]: Deploying daemon nfs.cephfs.1.0.compute-2.mtmthg on compute-2
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:55 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.akqbal
Oct  9 11:00:55 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.akqbal
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:55 compute-0 ceph-mgr[4997]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 11:00:55 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:55 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct  9 11:00:55 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:55 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.akqbal-rgw
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.akqbal-rgw
Oct  9 11:00:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 11:00:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.akqbal's ganesha conf is defaulting to empty
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.akqbal's ganesha conf is defaulting to empty
Oct  9 11:00:56 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:00:56 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.akqbal on compute-0
Oct  9 11:00:56 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.akqbal on compute-0
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.585878922 +0000 UTC m=+0.041296164 container create 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:00:56 compute-0 systemd[1]: Started libpod-conmon-99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6.scope.
Oct  9 11:00:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.663318723 +0000 UTC m=+0.118735985 container init 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.569622941 +0000 UTC m=+0.025040183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.668591981 +0000 UTC m=+0.124009223 container start 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:56 compute-0 zen_lumiere[27058]: 167 167
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.672422353 +0000 UTC m=+0.127839605 container attach 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 11:00:56 compute-0 systemd[1]: libpod-99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6.scope: Deactivated successfully.
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.672976981 +0000 UTC m=+0.128394233 container died 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-45010a24bbd73ee55d419cd161e4ea71e5e095a40f6edca3f623058f912fe299-merged.mount: Deactivated successfully.
Oct  9 11:00:56 compute-0 podman[27042]: 2025-10-09 11:00:56.713862751 +0000 UTC m=+0.169279993 container remove 99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 11:00:56 compute-0 systemd[1]: libpod-conmon-99902edccc32abfb882f5389f78548edfb0285408ce79bbea933cf5cdc5f02a6.scope: Deactivated successfully.
Oct  9 11:00:56 compute-0 systemd[1]: Reloading.
Oct  9 11:00:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:00:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.2.0.compute-0.akqbal
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Rados config object exists: conf-nfs.cephfs
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Creating key for client.nfs.cephfs.2.0.compute-0.akqbal-rgw
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 11:00:56 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Bind address in nfs.cephfs.2.0.compute-0.akqbal's ganesha conf is defaulting to empty
Oct  9 11:00:56 compute-0 ceph-mon[4705]: Deploying daemon nfs.cephfs.2.0.compute-0.akqbal on compute-0
Oct  9 11:00:57 compute-0 systemd[1]: Reloading.
Oct  9 11:00:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:00:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:00:57 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.akqbal for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:00:57 compute-0 podman[27201]: 2025-10-09 11:00:57.578064308 +0000 UTC m=+0.039072592 container create ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ff81faea2406a9385d0a0590e356554cd463440fe52b10e60c1e763193cf4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ff81faea2406a9385d0a0590e356554cd463440fe52b10e60c1e763193cf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ff81faea2406a9385d0a0590e356554cd463440fe52b10e60c1e763193cf4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79ff81faea2406a9385d0a0590e356554cd463440fe52b10e60c1e763193cf4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.akqbal-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 11:00:57 compute-0 podman[27201]: 2025-10-09 11:00:57.63277255 +0000 UTC m=+0.093780864 container init ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:57 compute-0 podman[27201]: 2025-10-09 11:00:57.637962746 +0000 UTC m=+0.098971030 container start ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:00:57 compute-0 bash[27201]: ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b
Oct  9 11:00:57 compute-0 podman[27201]: 2025-10-09 11:00:57.560398882 +0000 UTC m=+0.021407186 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:00:57 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.akqbal for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 11:00:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:00:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 11:00:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 3fc4d835-3043-4db4-8f30-6e36e78e0af4 (Updating nfs.cephfs deployment (+3 -> 3))
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 3fc4d835-3043-4db4-8f30-6e36e78e0af4 (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 11:00:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev abaa8dc9-23fa-4915-89dd-c008980c3b1d (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  9 11:00:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 11:00:57 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.thyuoj on compute-1
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.thyuoj on compute-1
Oct  9 11:00:57 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 8 op/s
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct  9 11:00:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:00:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 11:00:58 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:58 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:58 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:58 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:58 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:58 compute-0 ceph-mon[4705]: Deploying daemon haproxy.nfs.cephfs.compute-1.thyuoj on compute-1
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 14 completed events
Oct  9 11:00:59 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:00:59 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:00:59 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:01:01 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:01 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.6 KiB/s wr, 9 op/s
Oct  9 11:01:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:01:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:01:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:03 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct  9 11:01:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.zhclxd on compute-0
Oct  9 11:01:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.zhclxd on compute-0
Oct  9 11:01:04 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:04 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:04 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:04 compute-0 ceph-mon[4705]: Deploying daemon haproxy.nfs.cephfs.compute-0.zhclxd on compute-0
Oct  9 11:01:04 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:04 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:05 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.526910608 +0000 UTC m=+2.145807297 container create 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 systemd[1]: Started libpod-conmon-1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222.scope.
Oct  9 11:01:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.509064317 +0000 UTC m=+2.127961026 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.60625248 +0000 UTC m=+2.225149189 container init 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.615132244 +0000 UTC m=+2.234028933 container start 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.619137252 +0000 UTC m=+2.238033961 container attach 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 laughing_bartik[27491]: 0 0
Oct  9 11:01:06 compute-0 systemd[1]: libpod-1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222.scope: Deactivated successfully.
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.6244038 +0000 UTC m=+2.243300489 container died 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a8bf7d1e32ad7cf9248a6fcc6dbad6702fed0ed5164f1292d7cf31e29a9aca2-merged.mount: Deactivated successfully.
Oct  9 11:01:06 compute-0 podman[27376]: 2025-10-09 11:01:06.663190283 +0000 UTC m=+2.282086972 container remove 1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222 (image=quay.io/ceph/haproxy:2.3, name=laughing_bartik)
Oct  9 11:01:06 compute-0 systemd[1]: libpod-conmon-1dfb79823d0c7f6fff6417bf5931eb97ea84b3f4b48ee79e8bdc18ab973a8222.scope: Deactivated successfully.
Oct  9 11:01:06 compute-0 systemd[1]: Reloading.
Oct  9 11:01:06 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:06 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:06 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:06 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:07 compute-0 systemd[1]: Reloading.
Oct  9 11:01:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:07 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.zhclxd for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:07 compute-0 podman[27640]: 2025-10-09 11:01:07.442003983 +0000 UTC m=+0.034661380 container create 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:01:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3713577a475570ad14e0bb45053c251e4afac445916708da5b4127ff974ce7/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:07 compute-0 podman[27640]: 2025-10-09 11:01:07.496782768 +0000 UTC m=+0.089440165 container init 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:01:07 compute-0 podman[27640]: 2025-10-09 11:01:07.501802948 +0000 UTC m=+0.094460345 container start 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:01:07 compute-0 bash[27640]: 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f
Oct  9 11:01:07 compute-0 podman[27640]: 2025-10-09 11:01:07.426692533 +0000 UTC m=+0.019349950 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 11:01:07 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd[27655]: [NOTICE] 281/110107 (2) : New worker #1 (4) forked
Oct  9 11:01:07 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.zhclxd for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:07 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct  9 11:01:07 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:08 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.xqfbnl on compute-2
Oct  9 11:01:08 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.xqfbnl on compute-2
Oct  9 11:01:08 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:08 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:08 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:08 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:08 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:08 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:08 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:08 compute-0 ceph-mon[4705]: Deploying daemon haproxy.nfs.cephfs.compute-2.xqfbnl on compute-2
Oct  9 11:01:09 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  9 11:01:10 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:10 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:10 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:10 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:11 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct  9 11:01:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:01:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:01:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct  9 11:01:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.wkoquj on compute-0
Oct  9 11:01:12 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.wkoquj on compute-0
Oct  9 11:01:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:12 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:12 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:12 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:12 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:13 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:13 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:13 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:13 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:13 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:13 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:13 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:13 compute-0 ceph-mon[4705]: Deploying daemon keepalived.nfs.cephfs.compute-0.wkoquj on compute-0
Oct  9 11:01:13 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:13 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:13 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  9 11:01:14 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:14 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:14 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:14 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:15 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd[27655]: [WARNING] 281/110115 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 11:01:15 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:15 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.821880449 +0000 UTC m=+2.947826781 container create bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9)
Oct  9 11:01:15 compute-0 systemd[1341]: Created slice User Background Tasks Slice.
Oct  9 11:01:15 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  9 11:01:15 compute-0 systemd[1341]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 11:01:15 compute-0 systemd[1]: Started libpod-conmon-bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe.scope.
Oct  9 11:01:15 compute-0 systemd[1341]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 11:01:15 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.877247122 +0000 UTC m=+3.003193474 container init bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.88684752 +0000 UTC m=+3.012793872 container start bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.890368833 +0000 UTC m=+3.016315185 container attach bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct  9 11:01:15 compute-0 confident_davinci[27857]: 0 0
Oct  9 11:01:15 compute-0 systemd[1]: libpod-bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe.scope: Deactivated successfully.
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.894742662 +0000 UTC m=+3.020688994 container died bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, name=keepalived)
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.800758693 +0000 UTC m=+2.926705055 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 11:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d6403fb190468ea3245e669baffb7a8bd42d5bd770ed72a40b2d389e74d413d-merged.mount: Deactivated successfully.
Oct  9 11:01:15 compute-0 podman[27761]: 2025-10-09 11:01:15.92747344 +0000 UTC m=+3.053419772 container remove bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe (image=quay.io/ceph/keepalived:2.2.4, name=confident_davinci, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived)
Oct  9 11:01:15 compute-0 systemd[1]: libpod-conmon-bdfc4a723e76274058dea19ee66faa8665dac2af75de6f4021c67bb34747dafe.scope: Deactivated successfully.
Oct  9 11:01:15 compute-0 systemd[1]: Reloading.
Oct  9 11:01:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:16 compute-0 systemd[1]: Reloading.
Oct  9 11:01:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:16 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.wkoquj for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:16 compute-0 podman[28002]: 2025-10-09 11:01:16.71061099 +0000 UTC m=+0.035581011 container create f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived)
Oct  9 11:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37e0fdf236c5f4c2a3665cb6832f60fc56611a91a2fb6aadf11271ad907d28f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:16 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:16 compute-0 podman[28002]: 2025-10-09 11:01:16.760626762 +0000 UTC m=+0.085596803 container init f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, name=keepalived, vcs-type=git, io.buildah.version=1.28.2, version=2.2.4)
Oct  9 11:01:16 compute-0 podman[28002]: 2025-10-09 11:01:16.765424565 +0000 UTC m=+0.090394586 container start f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, vendor=Red Hat, Inc.)
Oct  9 11:01:16 compute-0 bash[28002]: f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01
Oct  9 11:01:16 compute-0 podman[28002]: 2025-10-09 11:01:16.696018412 +0000 UTC m=+0.020988453 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 11:01:16 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.wkoquj for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Starting VRRP child process, pid=4
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: Startup complete
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:16 2025: VRRP_Script(check_backend) succeeded
Oct  9 11:01:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:16 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a080016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:16 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.dxpkeo on compute-2
Oct  9 11:01:16 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.dxpkeo on compute-2
Oct  9 11:01:17 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:17 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:17 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:17 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:17 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30001dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:17 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  9 11:01:18 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:18 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:18 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:18 compute-0 ceph-mon[4705]: Deploying daemon keepalived.nfs.cephfs.compute-2.dxpkeo on compute-2
Oct  9 11:01:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:18 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:18 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:19 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:19 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:19 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  9 11:01:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:20 2025: (VI_0) Entering MASTER STATE
Oct  9 11:01:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:20 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a300091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:20 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:01:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:01:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.ymbnot on compute-1
Oct  9 11:01:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.ymbnot on compute-1
Oct  9 11:01:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:21 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:21 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct  9 11:01:22 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:22 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:22 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:22 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 11:01:22 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:22 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:22 compute-0 ceph-mon[4705]: Deploying daemon keepalived.nfs.cephfs.compute-1.ymbnot on compute-1
Oct  9 11:01:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:22 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:22 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a300091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:23 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:23 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  9 11:01:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:24 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Oct  9 11:01:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:24 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 11:01:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:24 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:24 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:25 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:25 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  9 11:01:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:01:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:01:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:26 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200021c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:26 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev abaa8dc9-23fa-4915-89dd-c008980c3b1d (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  9 11:01:26 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event abaa8dc9-23fa-4915-89dd-c008980c3b1d (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 29 seconds
Oct  9 11:01:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 11:01:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:26 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev eee4459e-c471-4cc3-91c6-2e477b173487 (Updating alertmanager deployment (+1 -> 1))
Oct  9 11:01:26 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 11:01:26 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 11:01:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:26 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:27 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:27 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 11:01:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:27 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 11:01:27 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:27 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:27 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:27 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:27 compute-0 ceph-mon[4705]: Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 11:01:27 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.592351147 +0000 UTC m=+1.363840076 volume create b21e5efeef8382ca766779c2e8b7f082320a62c69411ab5c31daf4ac7cf868db
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.603782563 +0000 UTC m=+1.375271502 container create 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 systemd[1]: Started libpod-conmon-516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab.scope.
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.576664504 +0000 UTC m=+1.348153453 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:01:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4740f934558b807139b1ee209e53d27cd75521994c7e97cd9129a8e14370ad2/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.70081932 +0000 UTC m=+1.472308269 container init 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.709920202 +0000 UTC m=+1.481409131 container start 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.714401815 +0000 UTC m=+1.485890754 container attach 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 youthful_margulis[28256]: 65534 65534
Oct  9 11:01:28 compute-0 systemd[1]: libpod-516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab.scope: Deactivated successfully.
Oct  9 11:01:28 compute-0 conmon[28256]: conmon 516ea689eb0a62493cb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab.scope/container/memory.events
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.717233576 +0000 UTC m=+1.488722515 container died 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4740f934558b807139b1ee209e53d27cd75521994c7e97cd9129a8e14370ad2-merged.mount: Deactivated successfully.
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.7510909 +0000 UTC m=+1.522579829 container remove 516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_margulis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28122]: 2025-10-09 11:01:28.755361196 +0000 UTC m=+1.526850146 volume remove b21e5efeef8382ca766779c2e8b7f082320a62c69411ab5c31daf4ac7cf868db
Oct  9 11:01:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:28 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:28 compute-0 systemd[1]: libpod-conmon-516ea689eb0a62493cb63f8d13044e4c207299c362be873329f90c374955c7ab.scope: Deactivated successfully.
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.814739638 +0000 UTC m=+0.038972129 volume create ef4cbdd030120eac3c5fff346b4e377390231141a3a42080f495b706208416c4
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.824220312 +0000 UTC m=+0.048452803 container create 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 systemd[1]: Started libpod-conmon-717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27.scope.
Oct  9 11:01:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:28 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02411fcecbfb1bce372fb92946bce9bd0116cc4972eea357ad4220e6a0b60945/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.881723783 +0000 UTC m=+0.105956284 container init 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.890520855 +0000 UTC m=+0.114753346 container start 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 vibrant_diffie[28291]: 65534 65534
Oct  9 11:01:28 compute-0 systemd[1]: libpod-717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27.scope: Deactivated successfully.
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.800341107 +0000 UTC m=+0.024573618 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.901887309 +0000 UTC m=+0.126119820 container attach 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.902122157 +0000 UTC m=+0.126354658 container died 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-02411fcecbfb1bce372fb92946bce9bd0116cc4972eea357ad4220e6a0b60945-merged.mount: Deactivated successfully.
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.958340067 +0000 UTC m=+0.182572558 container remove 717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vibrant_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:28 compute-0 podman[28273]: 2025-10-09 11:01:28.961395345 +0000 UTC m=+0.185627836 volume remove ef4cbdd030120eac3c5fff346b4e377390231141a3a42080f495b706208416c4
Oct  9 11:01:28 compute-0 systemd[1]: libpod-conmon-717b3cab586c5449c34f81e805802f12aa6055d6052d5567fc90f51df4bb7c27.scope: Deactivated successfully.
Oct  9 11:01:29 compute-0 systemd[1]: Reloading.
Oct  9 11:01:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:29 compute-0 systemd[1]: Reloading.
Oct  9 11:01:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:29 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:29 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:29 compute-0 podman[28433]: 2025-10-09 11:01:29.7818802 +0000 UTC m=+0.039241988 volume create a0b6c50a2a31474484748a6ee8545de38e0ba2c0b5d2f916a441fbf3b3979805
Oct  9 11:01:29 compute-0 podman[28433]: 2025-10-09 11:01:29.791527409 +0000 UTC m=+0.048889197 container create e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23582dab11349f643784c785110f40c91f53ca635a975b97db00fd6edeab18b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_11:01:29
Oct  9 11:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23582dab11349f643784c785110f40c91f53ca635a975b97db00fd6edeab18b/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [balancer INFO root] do_upmap
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', '.nfs']
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:29 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Oct  9 11:01:29 compute-0 podman[28433]: 2025-10-09 11:01:29.850298101 +0000 UTC m=+0.107659899 container init e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:29 compute-0 podman[28433]: 2025-10-09 11:01:29.854854027 +0000 UTC m=+0.112215815 container start e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:29 compute-0 bash[28433]: e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e
Oct  9 11:01:29 compute-0 podman[28433]: 2025-10-09 11:01:29.767491909 +0000 UTC m=+0.024853727 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:01:29 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.881Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.881Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Oct  9 11:01:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct  9 11:01:29 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.890Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=172.19.0.101 port=9094
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.892Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:01:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 15 completed events
Oct  9 11:01:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:01:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.938Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.938Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.942Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  9 11:01:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:29.942Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:01:29 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  9 11:01:30 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev a63dfcdf-1e66-4d0e-be48-9c8225a7dc64 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev eee4459e-c471-4cc3-91c6-2e477b173487 (Updating alertmanager deployment (+1 -> 1))
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event eee4459e-c471-4cc3-91c6-2e477b173487 (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev f48a492e-7878-4dc7-8154-44ce02579bd4 (Updating grafana deployment (+1 -> 1))
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 11:01:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct  9 11:01:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct  9 11:01:30 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct  9 11:01:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:30 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 11:01:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:30 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:30 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  9 11:01:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  9 11:01:31 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  9 11:01:31 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 598a3168-59d3-4ef2-a83d-ed5187409b58 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  9 11:01:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct  9 11:01:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: Regenerating cephadm self-signed grafana TLS certificates
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 11:01:31 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:31 compute-0 ceph-mon[4705]: Deploying daemon grafana.compute-0 on compute-0
Oct  9 11:01:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:31 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:31 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct  9 11:01:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 11:01:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 11:01:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:31.893Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000987989s
Oct  9 11:01:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  9 11:01:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  9 11:01:32 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  9 11:01:32 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev e91a9a83-a982-40f1-b5f6-acc1a98e4c1d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 55 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55 pruub=10.571846962s) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 35'5 mlcod 35'5 active pruub 175.366165161s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:32 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 55 pg[9.0( v 42'1020 (0'0,42'1020] local-lis/les=36/37 n=178 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=55 pruub=12.605444908s) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 42'1019 mlcod 42'1019 active pruub 177.399948120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:32 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 55 pg[8.0( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55 pruub=10.571846962s) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 35'5 mlcod 0'0 unknown pruub 175.366165161s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55e466111440) operator()   moving buffer(0x55e4661d3ba8 space 0x55e466227e20 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55e466111440) operator()   moving buffer(0x55e4661e0488 space 0x55e4661b7a10 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55e466111440) operator()   moving buffer(0x55e466211248 space 0x55e4661b6aa0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct  9 11:01:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:32 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 55 pg[9.0( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=55 pruub=12.605444908s) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 42'1019 mlcod 0'0 unknown pruub 177.399948120s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d9108 space 0x55e466226830 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e0d48 space 0x55e46621c1b0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e466210848 space 0x55e4661b7c80 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e0348 space 0x55e4660c9d50 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e1248 space 0x55e46621cf80 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d97e8 space 0x55e46623bd50 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661cbc48 space 0x55e46621d120 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e0848 space 0x55e46565f1f0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661f1ba8 space 0x55e4661b6010 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661b1ec8 space 0x55e46565f120 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661ab4c8 space 0x55e4660d7ae0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661b14c8 space 0x55e46619f6d0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661f1e28 space 0x55e46565f050 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e465f73428 space 0x55e466226010 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661f0c08 space 0x55e46565ef80 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d3108 space 0x55e46621cb70 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e08e8 space 0x55e4661b6de0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e466211108 space 0x55e46565f2c0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d2ac8 space 0x55e4661b6eb0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661f0028 space 0x55e4660d7bb0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e466210348 space 0x55e46565fc80 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661c1568 space 0x55e4661b6c40 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d22a8 space 0x55e465722350 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d9ec8 space 0x55e46621caa0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661b0ac8 space 0x55e46619e010 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661e1888 space 0x55e46621d050 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661abce8 space 0x55e4661b6b70 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661b0208 space 0x55e46619e0e0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661d3248 space 0x55e46621ceb0 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-osd[12987]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55e464ea0fc0) operator()   moving buffer(0x55e4661ca528 space 0x55e46565f390 0x0~1000 clean)
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:32 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd[27655]: [WARNING] 281/110132 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 11:01:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:32 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:32 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  9 11:01:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  9 11:01:33 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  9 11:01:33 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 8d3fd9c1-28d8-471d-9523-3f81bb4782c0 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  9 11:01:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct  9 11:01:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.14( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.14( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.15( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.17( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.15( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.16( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.16( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.17( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.11( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.10( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.10( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.11( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.3( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.2( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.2( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.3( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.e( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.9( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.8( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.8( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.9( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.b( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.f( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.c( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.d( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.a( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.6( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.7( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.7( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.4( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.5( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.5( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.4( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1a( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1b( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.18( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.19( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.19( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.18( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1e( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1f( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1c( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1d( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.12( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.13( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.13( v 42'1020 lc 0'0 (0'0,42'1020] local-lis/les=36/37 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.12( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.14( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.11( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.10( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.2( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.c( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.0( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 42'1019 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 35'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.7( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.4( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.5( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1c( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.1e( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.13( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.12( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=36/36 les/c/f=37/37/0 sis=55) [0] r=0 lpr=55 pi=[36,55)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 56 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=34/34 les/c/f=35/35/0 sis=55) [0] r=0 lpr=55 pi=[34,55)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:33 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:33 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 11:01:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:33 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:33 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  9 11:01:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 11:01:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 11:01:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  9 11:01:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  9 11:01:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  9 11:01:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  9 11:01:34 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev f01be49e-d4ec-455d-a30d-34b57d93077a (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57 pruub=14.872456551s) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active pruub 181.762557983s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev a63dfcdf-1e66-4d0e-be48-9c8225a7dc64 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event a63dfcdf-1e66-4d0e-be48-9c8225a7dc64 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 598a3168-59d3-4ef2-a83d-ed5187409b58 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 598a3168-59d3-4ef2-a83d-ed5187409b58 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev e91a9a83-a982-40f1-b5f6-acc1a98e4c1d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event e91a9a83-a982-40f1-b5f6-acc1a98e4c1d (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 8d3fd9c1-28d8-471d-9523-3f81bb4782c0 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 8d3fd9c1-28d8-471d-9523-3f81bb4782c0 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev f01be49e-d4ec-455d-a30d-34b57d93077a (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  9 11:01:34 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event f01be49e-d4ec-455d-a30d-34b57d93077a (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct  9 11:01:34 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57 pruub=14.872456551s) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown pruub 181.762557983s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:34 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:34 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:34 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  9 11:01:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  9 11:01:35 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 21 completed events
Oct  9 11:01:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:01:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  9 11:01:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:35 compute-0 ceph-mgr[4997]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Oct  9 11:01:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  9 11:01:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.15( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.c( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.b( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.9( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.d( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.2( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.18( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1f( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.11( empty local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.15( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.0( empty local-lis/les=57/58 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.c( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.b( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.9( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.d( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.2( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.18( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1f( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 58 pg[11.11( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [0] r=0 lpr=57 pi=[40,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:35 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 11:01:35 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:35 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:35 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:35 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:35 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 11:01:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:35 compute-0 podman[28565]: 2025-10-09 11:01:35.969056687 +0000 UTC m=+5.096257153 container create 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:35 compute-0 systemd[1]: Started libpod-conmon-30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f.scope.
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:35.953077225 +0000 UTC m=+5.080277711 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:01:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:36.024508283 +0000 UTC m=+5.151708809 container init 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:36.030751923 +0000 UTC m=+5.157952399 container start 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 determined_hodgkin[28784]: 472 0
Oct  9 11:01:36 compute-0 systemd[1]: libpod-30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f.scope: Deactivated successfully.
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:36.034705109 +0000 UTC m=+5.161905575 container attach 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:36.035036429 +0000 UTC m=+5.162236895 container died 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-84cc9c8aa215b20ddbca86d572cc293df7c20472a29fa889eff83a791abc7506-merged.mount: Deactivated successfully.
Oct  9 11:01:36 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  9 11:01:36 compute-0 podman[28565]: 2025-10-09 11:01:36.074285926 +0000 UTC m=+5.201486412 container remove 30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f (image=quay.io/ceph/grafana:10.4.0, name=determined_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  9 11:01:36 compute-0 systemd[1]: libpod-conmon-30bd301f5e33712e0a90227efc95cff6569d877517c283e66b4871c67ce9936f.scope: Deactivated successfully.
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.168807893 +0000 UTC m=+0.074475616 container create fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 systemd[1]: Started libpod-conmon-fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413.scope.
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.117432629 +0000 UTC m=+0.023100372 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:01:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.236314615 +0000 UTC m=+0.141982358 container init fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.241418428 +0000 UTC m=+0.147086151 container start fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 mystifying_driscoll[28817]: 472 0
Oct  9 11:01:36 compute-0 systemd[1]: libpod-fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413.scope: Deactivated successfully.
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.244544008 +0000 UTC m=+0.150211731 container attach fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.244727794 +0000 UTC m=+0.150395517 container died fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  9 11:01:36 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  9 11:01:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  9 11:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b49507cfd80590dff88f229b110bfec24a79fc4c3edb1ab4431a92cc9edd51a-merged.mount: Deactivated successfully.
Oct  9 11:01:36 compute-0 podman[28801]: 2025-10-09 11:01:36.279323653 +0000 UTC m=+0.184991376 container remove fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413 (image=quay.io/ceph/grafana:10.4.0, name=mystifying_driscoll, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:36 compute-0 systemd[1]: libpod-conmon-fde752ebfa8f80dea1da8f37578538607d580bcb823627160eab87c646486413.scope: Deactivated successfully.
Oct  9 11:01:36 compute-0 systemd[1]: Reloading.
Oct  9 11:01:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:36 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 11:01:36 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:36 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:36 compute-0 systemd[1]: Reloading.
Oct  9 11:01:36 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:36 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:36 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:36 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:36 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:37 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Oct  9 11:01:37 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Oct  9 11:01:37 compute-0 podman[28960]: 2025-10-09 11:01:37.211501574 +0000 UTC m=+0.042147850 container create 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  9 11:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  9 11:01:37 compute-0 podman[28960]: 2025-10-09 11:01:37.271319849 +0000 UTC m=+0.101966145 container init 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  9 11:01:37 compute-0 podman[28960]: 2025-10-09 11:01:37.276130144 +0000 UTC m=+0.106776420 container start 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:01:37 compute-0 bash[28960]: 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2
Oct  9 11:01:37 compute-0 podman[28960]: 2025-10-09 11:01:37.193088394 +0000 UTC m=+0.023734690 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:01:37 compute-0 systemd[1]: Started Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev f48a492e-7878-4dc7-8154-44ce02579bd4 (Updating grafana deployment (+1 -> 1))
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event f48a492e-7878-4dc7-8154-44ce02579bd4 (Updating grafana deployment (+1 -> 1)) in 7 seconds
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev eb3a3053-b88d-42a2-9b3c-a1ce278ba5f8 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct  9 11:01:37 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.kuntxb on compute-0
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.kuntxb on compute-0
Oct  9 11:01:37 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443048269Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-09T11:01:37Z
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443326718Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443341599Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443345969Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443349479Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443354549Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443358309Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443361749Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44336882Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44337259Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44337584Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44337914Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.4433827Z level=info msg=Target target=[all]
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44338938Z level=info msg="Path Home" path=/usr/share/grafana
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443393Z level=info msg="Path Data" path=/var/lib/grafana
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.44339618Z level=info msg="Path Logs" path=/var/log/grafana
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443400341Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443403851Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=settings t=2025-10-09T11:01:37.443407871Z level=info msg="App mode production"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore t=2025-10-09T11:01:37.44369658Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore t=2025-10-09T11:01:37.443716431Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.444395693Z level=info msg="Starting DB migrations"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.445620642Z level=info msg="Executing migration" id="create migration_log table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.44681276Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.191809ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.449390663Z level=info msg="Executing migration" id="create user table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.450139477Z level=info msg="Migration successfully executed" id="create user table" duration=749.015µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.451770789Z level=info msg="Executing migration" id="add unique index user.login"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.452482502Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=711.003µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.454688402Z level=info msg="Executing migration" id="add unique index user.email"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.455359674Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=670.542µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.458086111Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.458816135Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=720.393µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.460509839Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.46117541Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=666.851µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.462950317Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.465011082Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.060485ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.467409839Z level=info msg="Executing migration" id="create user table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.46805669Z level=info msg="Migration successfully executed" id="create user table v2" duration=646.531µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.469422404Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.470063944Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=641.48µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.472597776Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.473179024Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=580.788µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.475412376Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.475791497Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=376.301µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.47741421Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.477968468Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=553.168µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.479755895Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.480906702Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.149547ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.482975108Z level=info msg="Executing migration" id="Update user table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.483007919Z level=info msg="Migration successfully executed" id="Update user table charset" duration=85.303µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.486398478Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.487467202Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.064463ms
Oct  9 11:01:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.48927759Z level=info msg="Executing migration" id="Add missing user data"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.489461046Z level=info msg="Migration successfully executed" id="Add missing user data" duration=183.766µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.491133139Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.492001087Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=867.858µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.494130785Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.494979053Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=848.538µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.497682499Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.498774394Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.094695ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.500401686Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.506749539Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.347223ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.508293619Z level=info msg="Executing migration" id="Add uid column to user"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.50929407Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=999.981µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.511696368Z level=info msg="Executing migration" id="Update uid column values for users"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.511862523Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=166.715µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.514327562Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.514872929Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=545.397µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.518096952Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.518686351Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=589.409µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.520416097Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.521230663Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=815.426µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.524094635Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.524618121Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=523.466µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.526335796Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.526883864Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=547.858µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.528345401Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.528901768Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=556.017µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.531367318Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.531394369Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.821µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.533912319Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.534483117Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=571.198µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.536316626Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.536899285Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=582.289µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.538389882Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.538956061Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=565.699µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.541839983Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.542387841Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=547.498µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd[27655]: [WARNING] 281/110137 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.545138439Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.547555936Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.417307ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.549489478Z level=info msg="Executing migration" id="create temp_user v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.550115488Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=625.7µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.552722412Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.5533049Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=582.309µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.555686887Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.556322127Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=634.891µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.558143545Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.558702073Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=558.108µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.561065689Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.561665258Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=599.469µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.564560901Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.564892542Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=332.361µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.566867135Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.56734551Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=478.116µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.568666502Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.569115196Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=445.844µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.572496415Z level=info msg="Executing migration" id="create star table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.573029212Z level=info msg="Migration successfully executed" id="create star table" duration=535.037µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.574330613Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.574893601Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=562.738µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.576525003Z level=info msg="Executing migration" id="create org table v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.577192175Z level=info msg="Migration successfully executed" id="create org table v1" duration=666.652µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.579058365Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.579595022Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=535.997µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.581512534Z level=info msg="Executing migration" id="create org_user table v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.582103872Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=590.228µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.586740851Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.587510686Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=769.525µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.589283322Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.590000276Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=717.134µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.591828974Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.592776075Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=947.29µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.594764118Z level=info msg="Executing migration" id="Update org table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.594792339Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.971µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.596538244Z level=info msg="Executing migration" id="Update org_user table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.596571685Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=34.291µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.598504508Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.598716105Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=211.897µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.600998168Z level=info msg="Executing migration" id="create dashboard table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.602064821Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.066833ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.604010224Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.605203902Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.194238ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.607059682Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.607724063Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=666.79µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.610263944Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.610836552Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=572.528µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.614357965Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.615045798Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=688.933µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.618127706Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:37 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.618712164Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=584.228µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.620275545Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.624751498Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.473893ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.62733261Z level=info msg="Executing migration" id="create dashboard v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.628090215Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=756.065µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.629892532Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.630475061Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=582.829µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.633497218Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.634141289Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=643.911µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.635863104Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.636175754Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=312.72µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.63856902Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.639352456Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=785.096µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.640628147Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.640674598Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=47.071µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.64197182Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.643344984Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.372915ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.644749018Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.646099202Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.350074ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.647433844Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.648758497Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.324383ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.650457721Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.651135373Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=677.262µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.652783586Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.654135169Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.351443ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.655696779Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.656300459Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=603.469µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.658331033Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.658963624Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=632.911µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.661171934Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.661194595Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=23.271µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.662957242Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.662981803Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=25.261µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.664498491Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.666336239Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.835468ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.66788862Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.669400988Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.512238ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.670980229Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.672417074Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.436845ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.673903343Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.675283186Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.379624ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.676733383Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.676900288Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=167.105µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.678541601Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.679228773Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=687.592µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.681264938Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.681840517Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=575.528µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.683232291Z level=info msg="Executing migration" id="Update dashboard title length"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.683250021Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=18.281µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.684899325Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.685521375Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=622µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.687132796Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.687777577Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=644.621µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.689673137Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.693921504Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.246137ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.695305178Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.695874036Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=569.089µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.697597881Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.698254652Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=656.411µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.699841242Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.700490023Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=648.031µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.702404435Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.702696674Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=291.929µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.704166781Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.704666828Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=499.827µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.706340911Z level=info msg="Executing migration" id="Add check_sum column"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.707832329Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.491068ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.709252334Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.709906515Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=653.961µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.711730853Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.711860278Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=129.595µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.71694666Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.717103715Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=157.205µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.71848915Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.71911562Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=626.3µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.720846446Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.722424146Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.57748ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.725169034Z level=info msg="Executing migration" id="create data_source table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.725888557Z level=info msg="Migration successfully executed" id="create data_source table" duration=719.324µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.7275273Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.7281913Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=661.29µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.730163044Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.730744222Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=580.488µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.732979814Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.7338053Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=824.996µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.735994141Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.736582469Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=587.898µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.739909656Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.744317948Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.409241ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.745878947Z level=info msg="Executing migration" id="create data_source table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.746731895Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=852.438µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.748254363Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.749052619Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=797.546µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.750748763Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.751430535Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=682.242µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.753539352Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.754041128Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=501.846µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.755609369Z level=info msg="Executing migration" id="Add column with_credentials"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.757500469Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.89047ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.759289176Z level=info msg="Executing migration" id="Add secure json data column"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.761012872Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.724076ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.762669955Z level=info msg="Executing migration" id="Update data_source table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.762694176Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=24.911µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.764373589Z level=info msg="Executing migration" id="Update initial version to 1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.764534184Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=162.995µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.766099675Z level=info msg="Executing migration" id="Add read_only data column"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.767745137Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.645292ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.769439492Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.769591137Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=151.795µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.771184708Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.771337022Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=152.584µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.773072648Z level=info msg="Executing migration" id="Add uid column"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.774871136Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.798088ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.776452906Z level=info msg="Executing migration" id="Update uid value"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.776601081Z level=info msg="Migration successfully executed" id="Update uid value" duration=148.355µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.778283854Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.778943606Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=662.332µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.780585859Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.781343953Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=758.343µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.783482002Z level=info msg="Executing migration" id="create api_key table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.784153152Z level=info msg="Migration successfully executed" id="create api_key table" duration=670.85µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.786301392Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.786912621Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=609.829µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.791355684Z level=info msg="Executing migration" id="add index api_key.key"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.791967744Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=612.279µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.795272729Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.795950951Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=678.261µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.798918016Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.799632529Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=715.684µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.805241148Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.806045783Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=804.915µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.808528544Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.809700741Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.172627ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.812036195Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.820521777Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.480572ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.823806602Z level=info msg="Executing migration" id="create api_key table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.824570357Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=763.845µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.827882503Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.828617566Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=735.853µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.830228328Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.8308873Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=658.952µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.834205795Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.834830966Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=626.45µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.837595834Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.837893624Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=297.9µs
Oct  9 11:01:37 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.839458494Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.840152856Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=694.322µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.842954426Z level=info msg="Executing migration" id="Update api_key table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.842978566Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.331µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.844743003Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.846742067Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.999094ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.849192205Z level=info msg="Executing migration" id="Add service account foreign key"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.851020414Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.827669ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.852382908Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.852511692Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=129.314µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.854591598Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.856614473Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.022895ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.858155083Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.859980281Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.824878ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.861616324Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.862255403Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=638.889µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.863874996Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.864407073Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=531.377µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.865991803Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.866702526Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=710.603µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.868560366Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.86929755Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=737.443µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.873271826Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.874081172Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=810.186µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.879285089Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.880127656Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=833.497µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.885074104Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.885183768Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=113.654µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.887527453Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.887557014Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=31.741µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.890423926Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.893322348Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.900143ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.89552896Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.898243257Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.712957ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.901100678Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.901226861Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=127.373µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.904382943Z level=info msg="Executing migration" id="create quota table v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.905366824Z level=info msg="Migration successfully executed" id="create quota table v1" duration=980.751µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.907960367Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.908814025Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=855.278µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.910653454Z level=info msg="Executing migration" id="Update quota table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.910683695Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=30.921µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.912131962Z level=info msg="Executing migration" id="create plugin_setting table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.912786142Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=651.191µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.914954652Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.915586751Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=632.129µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.917322898Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.919324391Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.001443ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.920790518Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.920812349Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=19.87µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.922361429Z level=info msg="Executing migration" id="create session table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.923063182Z level=info msg="Migration successfully executed" id="create session table" duration=701.602µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.924964732Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.925035544Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=71.212µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.926660826Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.926731338Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=70.662µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.928511476Z level=info msg="Executing migration" id="create playlist table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.929136986Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=625.36µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.931853303Z level=info msg="Executing migration" id="create playlist item table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.932432771Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=579.328µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.934489877Z level=info msg="Executing migration" id="Update playlist table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.934512298Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=23.131µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.935962384Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.935995395Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=33.931µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.937507604Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.939748106Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.240232ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.941427769Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.944098355Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.667965ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.945611194Z level=info msg="Executing migration" id="drop preferences table v2"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.945688496Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=77.122µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.94739512Z level=info msg="Executing migration" id="drop preferences table v3"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.947465613Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=69.233µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.94894905Z level=info msg="Executing migration" id="create preferences table v3"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.950022834Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.073384ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.95206771Z level=info msg="Executing migration" id="Update preferences table charset"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.95208815Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=21.11µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.953452145Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.955747108Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.294504ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.957269366Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.95738746Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=117.894µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.960571392Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct  9 11:01:37 compute-0 podman[29086]: 2025-10-09 11:01:37.961196882 +0000 UTC m=+0.049350111 container create 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.96300444Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.432358ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.965293684Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.968573628Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.283834ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.970568533Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.970629185Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=61.272µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.972595407Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.973450355Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=854.488µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.975399587Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.976198613Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=798.016µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.978361832Z level=info msg="Executing migration" id="create alert table v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.979356034Z level=info msg="Migration successfully executed" id="create alert table v1" duration=993.672µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.981526724Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.982402531Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=874.668µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.985878753Z level=info msg="Executing migration" id="add index alert state"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.986645767Z level=info msg="Migration successfully executed" id="add index alert state" duration=767.555µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.988840647Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.989905282Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.068395ms
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.991908096Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.992529946Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=621.84µs
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.99483943Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.995623295Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=787.615µs
Oct  9 11:01:37 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.997631549Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct  9 11:01:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.998515748Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=884.959µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:37.999951313Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct  9 11:01:38 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.009588832Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.628069ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.012542067Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.01360567Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.068283ms
Oct  9 11:01:38 compute-0 systemd[1]: Started libpod-conmon-9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071.scope.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.015385568Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.016197263Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=812.195µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.01952674Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.019914583Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=390.273µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.021650668Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.022267928Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=618.12µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.023801057Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.024472288Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=671.211µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.026005388Z level=info msg="Executing migration" id="Add column is_default"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.028600781Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.595333ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.030265944Z level=info msg="Executing migration" id="Add column frequency"
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:37.936635726 +0000 UTC m=+0.024788975 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.033017522Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.753708ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.034861352Z level=info msg="Executing migration" id="Add column send_reminder"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.037829577Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.966925ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.039277663Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.041778323Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.498889ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.043694885Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.044355845Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=660.92µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.046412052Z level=info msg="Executing migration" id="Update alert table charset"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.046434662Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=22.96µs
Oct  9 11:01:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.047952661Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.047974201Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=22.01µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.049581602Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.050186862Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=607.11µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.052820976Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.053487758Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=664.512µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.05544833Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.056190324Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=739.114µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.057584249Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.058218379Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=633.59µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.059654705Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.060351408Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=697.363µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.061906687Z level=info msg="Executing migration" id="Add for to alert table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.064626724Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.719687ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.066141833Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:38.066704781 +0000 UTC m=+0.154858030 container init 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.069017456Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.875442ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.070844444Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.071007009Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=163.195µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.072999393Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.073680165Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=681.122µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.076016269Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.076894368Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=879.329µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.078875781Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:38.079031046 +0000 UTC m=+0.167184275 container start 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.081824045Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.949994ms
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:38.082390324 +0000 UTC m=+0.170543583 container attach 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.083114207Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.083165598Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=51.382µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.084911004Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.085601247Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=689.683µs
Oct  9 11:01:38 compute-0 sharp_bhabha[29102]: 0 0
Oct  9 11:01:38 compute-0 systemd[1]: libpod-9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071.scope: Deactivated successfully.
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:38.08729446 +0000 UTC m=+0.175447689 container died 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.087014931Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.087981743Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=966.882µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.090554775Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.090656268Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=102.153µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.092195267Z level=info msg="Executing migration" id="create annotation table v5"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.09319864Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.003843ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.095456502Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.096378602Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=922.279µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.098389206Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.09914152Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=752.204µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.101759954Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.102522979Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=762.964µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.105411291Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.10634882Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=937.859µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.108336665Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct  9 11:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0066ca2e8fcf8a8a49a4ccd66f46cf9906e20ee4ecfa3a56b4d612b9c3e789de-merged.mount: Deactivated successfully.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.109316516Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=979.81µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.112385214Z level=info msg="Executing migration" id="Update annotation table charset"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.112458596Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=75.472µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.114796791Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.118576112Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.778061ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.122685653Z level=info msg="Executing migration" id="Drop category_id index"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.123616704Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=932.471µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.125539695Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.129078228Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.536323ms
Oct  9 11:01:38 compute-0 podman[29086]: 2025-10-09 11:01:38.129277905 +0000 UTC m=+0.217431144 container remove 9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071 (image=quay.io/ceph/haproxy:2.3, name=sharp_bhabha)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.131876158Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.132603152Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=728.523µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.1347329Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.135545986Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=813.207µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.137718355Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.13849892Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=780.635µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.140476874Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct  9 11:01:38 compute-0 systemd[1]: libpod-conmon-9659c6483d403d7dcbcd871d44bfcc70322e0ff3ff8fb263d55f92e665798071.scope: Deactivated successfully.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.149495323Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.017849ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.151354122Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.152146167Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=789.305µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.153945665Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.155167344Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.24497ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.157397405Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.157702555Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=303.08µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.159232914Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.159854274Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=621.35µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.161897649Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.162134997Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=244.328µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.163828092Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.16784311Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.012818ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.169739791Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.173119479Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.378257ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.174690179Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.175441064Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=751.005µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.177093256Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.177847871Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=754.285µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.180125693Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.180356491Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=230.808µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.182243901Z level=info msg="Executing migration" id="Add epoch_end column"
Oct  9 11:01:38 compute-0 systemd[1]: Reloading.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.185365571Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.1213ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.187043074Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.187774538Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=733.024µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.189966678Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.190154214Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=185.816µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.191830928Z level=info msg="Executing migration" id="Move region to single row"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.192465248Z level=info msg="Migration successfully executed" id="Move region to single row" duration=637.67µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.19468892Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.195597508Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=908.918µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.197229031Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.198099188Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=872.857µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.199671829Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.200433804Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=761.795µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.202539071Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.203389589Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=850.128µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.205454185Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.206316862Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=862.587µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.207971465Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.208786852Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=812.816µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.211524159Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.211621442Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=97.543µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.215571178Z level=info msg="Executing migration" id="create test_data table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.216627692Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.056004ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.218942776Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.219701451Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=758.385µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.222580233Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.223414809Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=835.966µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.225558379Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.226433606Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=875.468µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.228440921Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.22871269Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=271.789µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.230685613Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.231054935Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=369.492µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.233053418Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.233158872Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.874µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.234791423Z level=info msg="Executing migration" id="create team table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.235734854Z level=info msg="Migration successfully executed" id="create team table" duration=941.631µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.242128799Z level=info msg="Executing migration" id="add index team.org_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.243201873Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.077224ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.24653751Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.247654896Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.117756ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.250328432Z level=info msg="Executing migration" id="Add column uid in team"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.253577406Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.248714ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.255944011Z level=info msg="Executing migration" id="Update uid column values in team"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.256131037Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=184.296µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.257853192Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.258640768Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=787.316µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.260640471Z level=info msg="Executing migration" id="create team member table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.261304553Z level=info msg="Migration successfully executed" id="create team member table" duration=664.012µs
Oct  9 11:01:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.265455555Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.266578492Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.124907ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.268745281Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.26963724Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=892.339µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.271553411Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.272411119Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=857.208µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.274612939Z level=info msg="Executing migration" id="Add column email to team table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.278409691Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.796802ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.280187907Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.283904677Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.71533ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.285759516Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.289725383Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.964547ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.29180948Z level=info msg="Executing migration" id="create dashboard acl table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.292996768Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.186568ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.295222719Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.296233341Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.009952ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.298464713Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.299775965Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.315582ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.311258343Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.312670968Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.415086ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.315035634Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.316123218Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.088384ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.318384Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.319520457Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.141147ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.32178913Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.322590036Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=801.306µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.324099643Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.324906069Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=805.976µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.327135951Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.327619716Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=483.885µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.329166026Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.329436775Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=271.539µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.331195421Z level=info msg="Executing migration" id="create tag table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.332124421Z level=info msg="Migration successfully executed" id="create tag table" duration=929.44µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.334368953Z level=info msg="Executing migration" id="add index tag.key_value"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.335282381Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=913.198µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.337185303Z level=info msg="Executing migration" id="create login attempt table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.337882276Z level=info msg="Migration successfully executed" id="create login attempt table" duration=696.823µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.339681383Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.340604043Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=922.53µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.342742121Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.343606048Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=863.687µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.345481049Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.357030249Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.543309ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.359469027Z level=info msg="Executing migration" id="create login_attempt v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.360397196Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=928.699µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.362460233Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.363597018Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.136176ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.366190711Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.366670537Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=481.126µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.36861239Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.369385314Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=769.974µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.371166051Z level=info msg="Executing migration" id="create user auth table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.372017478Z level=info msg="Migration successfully executed" id="create user auth table" duration=851.427µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.373765715Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.374792997Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.027863ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.377198304Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.377516434Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=319.56µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.379635493Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.38485684Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.216696ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.386920626Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.391962487Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.007901ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.394152727Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.399032134Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.874526ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.401476452Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.406530834Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.047871ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.409145277Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.410261073Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.117196ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.412854886Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.418059293Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.196486ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.420010935Z level=info msg="Executing migration" id="create server_lock table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.421005347Z level=info msg="Migration successfully executed" id="create server_lock table" duration=993.852µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.423187957Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.424345804Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.159227ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.42891167Z level=info msg="Executing migration" id="create user auth token table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.430101099Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.191029ms
Oct  9 11:01:38 compute-0 ceph-mon[4705]: Deploying daemon haproxy.rgw.default.compute-0.kuntxb on compute-0
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.434073055Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.435263414Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.191259ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.441623647Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.443024292Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.402755ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.445370897Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.446534395Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.163898ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.449031395Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.454447878Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.407883ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.45700718Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.458006322Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=998.932µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.460302096Z level=info msg="Executing migration" id="create cache_data table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.461108481Z level=info msg="Migration successfully executed" id="create cache_data table" duration=806.076µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.463363294Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.464491359Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.127795ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.466834024Z level=info msg="Executing migration" id="create short_url table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.467754944Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=919.46µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.470176712Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.471122092Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=944.481µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.473435006Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.473494988Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=59.612µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.477771495Z level=info msg="Executing migration" id="delete alert_definition table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.477877739Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=105.634µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.480265055Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.481180334Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=915.119µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.483962453Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.484965965Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=998.732µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.48731571Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.488250861Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=935.431µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.490706519Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.490768671Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=62.532µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.492698613Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.4935456Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=848.767µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.496252207Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.497258479Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.002722ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.49914622Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.500122461Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=975.882µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.501843426Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.502773786Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=930.16µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.504450349Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.508985994Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.534885ms
Oct  9 11:01:38 compute-0 systemd[1]: Reloading.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.510470752Z level=info msg="Executing migration" id="drop alert_definition table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.511377552Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=906.58µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.513666105Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.513880202Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=214.677µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.515809083Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.516724153Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=915.14µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.518660614Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.519697698Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.037014ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.521561207Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.522718504Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.156787ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.524516252Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.524717019Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=197.926µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.526519546Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.527623552Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.104476ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.529547463Z level=info msg="Executing migration" id="create alert_instance table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.530614148Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.066954ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.532481267Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.534309685Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.827558ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.536236398Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.53726048Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.024753ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.54130723Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.546578448Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.264968ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.548550931Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.549768251Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.21952ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.551578668Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.552605372Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.026544ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.554518923Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.580322639Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.796757ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.582616872Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct  9 11:01:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.60813918Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.459266ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.611076594Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.612571181Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.494887ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.614523035Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.615515526Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=992.132µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.618835172Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.624663789Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.824547ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.626874629Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.632363225Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.487106ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.634827374Z level=info msg="Executing migration" id="create alert_rule table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.636181118Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.354274ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.638706439Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.639891217Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.187328ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.642126888Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.643022137Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=900.149µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.64499396Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.645978471Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=984.281µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.647819201Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.647956575Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=138.084µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.650165306Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.654495974Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.330448ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.656115446Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.66030198Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.186584ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.662080458Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.667699887Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.61652ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.669708782Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.670641942Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=934.451µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.672321105Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.673209234Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=890.819µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.674896488Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.679159995Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.263667ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.680861879Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.685479616Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.617967ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.687229573Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.688307187Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.077234ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.690235919Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.694888308Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.652609ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.696800139Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.702101918Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.30835ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.704333781Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.704434384Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=101.553µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.706243462Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.70741649Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.173087ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.709606769Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.71058569Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=978.801µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.712578155Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.713746562Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.168336ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.716776229Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.716939024Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=164.055µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.71932666Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.724378512Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.045432ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.726195551Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.730898291Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.70052ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.73306512Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.738744282Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.676972ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.740955414Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.747608346Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.653903ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.749768205Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.755831939Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.061464ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.757965377Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.758081661Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=116.904µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.759948962Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.760765877Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=816.905µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.763243297Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.767909656Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.665449ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.770155018Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.770262552Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=108.074µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.772377709Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.777789332Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.409243ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:38 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.779741625Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.780707836Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=964.481µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.783171985Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.788150235Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.97274ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.790556312Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.79143676Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=880.838µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.793740044Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.794638292Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=898.418µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.796721349Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.80143022Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.708481ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.803338521Z level=info msg="Executing migration" id="create provenance_type table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.804207069Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=868.618µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.807963079Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.809078164Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.115235ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.811011277Z level=info msg="Executing migration" id="create alert_image table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.811722919Z level=info msg="Migration successfully executed" id="create alert_image table" duration=711.762µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.813813227Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.814672494Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=858.656µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.816703468Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.816814072Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=111.484µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.818800146Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.819766367Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=966.381µs
Oct  9 11:01:38 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.kuntxb for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.821917126Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.822894827Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=978.771µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.825358616Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.825693066Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.827157744Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.827582268Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=424.434µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.828987952Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.830098608Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.110266ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.831771031Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.838142156Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.368034ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.840406958Z level=info msg="Executing migration" id="create library_element table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.841528834Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.120066ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.843635281Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.845076787Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.440936ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.846962368Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.84766053Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=698.272µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.849796619Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.850682447Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=885.408µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.85231581Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.853191857Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=875.828µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.854978425Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.855062047Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=84.282µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.856514963Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.856626087Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=111.564µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.858332732Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.858692793Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.161µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.860681737Z level=info msg="Executing migration" id="create data_keys table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.861698059Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.016222ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.863600261Z level=info msg="Executing migration" id="create secrets table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:38 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.864551541Z level=info msg="Migration successfully executed" id="create secrets table" duration=950.85µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.866578676Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.893342653Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=26.759157ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.895522463Z level=info msg="Executing migration" id="add name column into data_keys"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.901459933Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.93623ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.903178749Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.903354734Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=176.436µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.904817321Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.933242691Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.420191ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.9353958Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct  9 11:01:38 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.965594517Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.192836ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.967624852Z level=info msg="Executing migration" id="create kv_store table v1"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.969010936Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.287491ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.97255809Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.973691616Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.133676ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.975591557Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.975831135Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=239.308µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.977366784Z level=info msg="Executing migration" id="create permission table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.978201021Z level=info msg="Migration successfully executed" id="create permission table" duration=832.557µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.980174554Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct  9 11:01:38 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.981074293Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=899.629µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.98285806Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.98380221Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=944.11µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.98568181Z level=info msg="Executing migration" id="create role table"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.986477906Z level=info msg="Migration successfully executed" id="create role table" duration=796.756µs
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.98817374Z level=info msg="Executing migration" id="add column display_name"
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.994052478Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.878608ms
Oct  9 11:01:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:38.995845135Z level=info msg="Executing migration" id="add column group_name"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.00222033Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.374384ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.004184033Z level=info msg="Executing migration" id="add index role.org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.005456353Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.2721ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.007794548Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.0090852Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.291022ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.011624101Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.012748386Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.123965ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.016807247Z level=info msg="Executing migration" id="create team role table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.017631124Z level=info msg="Migration successfully executed" id="create team role table" duration=823.866µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.019633247Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.020719403Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.088176ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.022999125Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.024733391Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.756165ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.027315963Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.028648456Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.333443ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.031080354Z level=info msg="Executing migration" id="create user role table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.032037525Z level=info msg="Migration successfully executed" id="create user role table" duration=957.451µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.034409441Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.035460654Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.050743ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.039905997Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.041205558Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.299811ms
Oct  9 11:01:39 compute-0 podman[29247]: 2025-10-09 11:01:39.041727685 +0000 UTC m=+0.043381810 container create 5dcfab3a69011f24d8252b5357584aee7eba52ee7c786fb0c5b06a8943ea92cc (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-0-kuntxb)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.043077028Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.044144552Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.067344ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.04781338Z level=info msg="Executing migration" id="create builtin role table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.048795842Z level=info msg="Migration successfully executed" id="create builtin role table" duration=982.312µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.051319122Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.053571374Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=2.238452ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.056473837Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.057539651Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.065594ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.059493714Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.065985381Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.490707ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.068286366Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.069534855Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.250819ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.071653243Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.072642955Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=989.772µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.074576177Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.075507247Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=930.111µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.077518021Z level=info msg="Executing migration" id="add unique index role.uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.078506333Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=987.912µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.080060552Z level=info msg="Executing migration" id="create seed assignment table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.080739994Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=679.692µs
Oct  9 11:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf20d9134dced92d41eb6beb539ed3d7ca41c71d452c014a50e66f00a6846a4/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.082861902Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.083752991Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=888.668µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.085776775Z level=info msg="Executing migration" id="add column hidden to role table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.092974696Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.14806ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.095203787Z level=info msg="Executing migration" id="permission kind migration"
Oct  9 11:01:39 compute-0 podman[29247]: 2025-10-09 11:01:39.095737995 +0000 UTC m=+0.097392140 container init 5dcfab3a69011f24d8252b5357584aee7eba52ee7c786fb0c5b06a8943ea92cc (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-0-kuntxb)
Oct  9 11:01:39 compute-0 podman[29247]: 2025-10-09 11:01:39.100553099 +0000 UTC m=+0.102207224 container start 5dcfab3a69011f24d8252b5357584aee7eba52ee7c786fb0c5b06a8943ea92cc (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-0-kuntxb)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.102824461Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.615674ms
Oct  9 11:01:39 compute-0 bash[29247]: 5dcfab3a69011f24d8252b5357584aee7eba52ee7c786fb0c5b06a8943ea92cc
Oct  9 11:01:39 compute-0 podman[29247]: 2025-10-09 11:01:39.02440294 +0000 UTC m=+0.026057085 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.104800995Z level=info msg="Executing migration" id="permission attribute migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.11121975Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.416165ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-0-kuntxb[29263]: [NOTICE] 281/110139 (2) : New worker #1 (4) forked
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.11337626Z level=info msg="Executing migration" id="permission identifier migration"
Oct  9 11:01:39 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.kuntxb for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:39 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:39 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.002000065s ======
Oct  9 11:01:39 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:39.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.119806855Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.425265ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.1214892Z level=info msg="Executing migration" id="add permission identifier index"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.12245367Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=964.33µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.124098693Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.125020882Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=921.899µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.126591283Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.127502131Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=910.608µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.129081042Z level=info msg="Executing migration" id="create query_history table v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.129848267Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=766.305µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.131386886Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.132308196Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=921.49µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.134080942Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.134163645Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=83.213µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.135536659Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.135598511Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=62.442µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.136957994Z level=info msg="Executing migration" id="teams permissions migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.137335267Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=376.023µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.138672429Z level=info msg="Executing migration" id="dashboard permissions"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.139138134Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=466.375µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.140536959Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.141104908Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=568.269µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.143308818Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.143518295Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=209.667µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.144972881Z level=info msg="Executing migration" id="alerting notification permissions"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.145396615Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=421.524µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.146819051Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.147509532Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=691.511µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.149421904Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.150905321Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.484467ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.152983307Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct  9 11:01:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.15898887Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.005753ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.160487458Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.160581601Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=94.243µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.163391691Z level=info msg="Executing migration" id="create correlation table v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.164554608Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.161087ms
Oct  9 11:01:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.167874285Z level=info msg="Executing migration" id="add index correlations.uid"
Oct  9 11:01:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.168850666Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=974.412µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.170465197Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.171294924Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=829.757µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.172867204Z level=info msg="Executing migration" id="add correlation config column"
Oct  9 11:01:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.179340302Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.469428ms
Oct  9 11:01:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.181060537Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.182123521Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.063484ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.183475254Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.184500387Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.024453ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.186081018Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct  9 11:01:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.zdhryc on compute-2
Oct  9 11:01:39 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.zdhryc on compute-2
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.202978779Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=16.893231ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.205219951Z level=info msg="Executing migration" id="create correlation v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.206383338Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.163246ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.207919757Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.209033342Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.113605ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.210858321Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.211867783Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.009612ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.214022763Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.21490075Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=878.127µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.216838733Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.217113751Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=274.958µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.220066536Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.220966985Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=902.27µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.222333959Z level=info msg="Executing migration" id="add provisioning column"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.228111234Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.778085ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.229704645Z level=info msg="Executing migration" id="create entity_events table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.230428758Z level=info msg="Migration successfully executed" id="create entity_events table" duration=724.233µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.231982387Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.232877426Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=894.719µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.234725865Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.235091927Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.236491072Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.236852623Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.238226338Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.238972272Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=745.814µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.240436218Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.241312287Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=876.119µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.242905607Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.243834607Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=915.17µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.245969405Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.246908946Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=939.271µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.24862935Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.249472818Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=843.878µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.250788629Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.251574475Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=785.906µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.252875587Z level=info msg="Executing migration" id="Drop public config table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.253574019Z level=info msg="Migration successfully executed" id="Drop public config table" duration=697.892µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.254893161Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.25576999Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=878.809µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.257128023Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.25796311Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=834.537µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.25925402Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.260130139Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=875.459µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.261451752Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.262263757Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=811.855µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.263850618Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.284262271Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.393073ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.285838632Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.292370342Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.5299ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.293941972Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.299779909Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.837897ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.30140317Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.301586426Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=183.516µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.303076555Z level=info msg="Executing migration" id="add share column"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.308903231Z level=info msg="Migration successfully executed" id="add share column" duration=5.826436ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.310548073Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.310706939Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=158.946µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.312174596Z level=info msg="Executing migration" id="create file table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.312983421Z level=info msg="Migration successfully executed" id="create file table" duration=798.775µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.314883983Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.315709229Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=825.166µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.317103664Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.317888328Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=784.324µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.31947893Z level=info msg="Executing migration" id="create file_meta table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.32011129Z level=info msg="Migration successfully executed" id="create file_meta table" duration=632.24µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.322590709Z level=info msg="Executing migration" id="file table idx: path key"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.323395584Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=804.955µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.324998876Z level=info msg="Executing migration" id="set path collation in file table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.325058188Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=57.312µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.326531645Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.326577147Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=44.311µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.328116276Z level=info msg="Executing migration" id="managed permissions migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.328482918Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=366.642µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.329834621Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.330004977Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=170.345µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.33166391Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.332689762Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.025172ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.334232952Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.340110871Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=5.874129ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.341838756Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.3419836Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=145.414µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.343347434Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.344292374Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=944.46µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.346233786Z level=info msg="Executing migration" id="update group index for alert rules"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.346533736Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=300.28µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.347986183Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.348147618Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=162.695µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.349661636Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.350031558Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=371.292µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.351501665Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.357409654Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=5.906849ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.358882632Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.364779601Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.896328ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.366278308Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.367298931Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.019933ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.368974954Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.440283858Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=71.304064ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.442091056Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.443175961Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.088125ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.44471172Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.445685151Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=972.561µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.448252833Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.467992286Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=19.738353ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.470205996Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.476837779Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.628933ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.478536073Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.478774651Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=239.558µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.480422743Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.480558568Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=135.884µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.48187704Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.482046376Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=169.956µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.484081931Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.484266767Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=185.126µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.485800685Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.485975571Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=173.956µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.487393387Z level=info msg="Executing migration" id="create folder table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.488199783Z level=info msg="Migration successfully executed" id="create folder table" duration=805.846µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.489597357Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.490575219Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=977.572µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.492242842Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.493189023Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=945.91µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.494784223Z level=info msg="Executing migration" id="Update folder title length"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.494807714Z level=info msg="Migration successfully executed" id="Update folder title length" duration=22.511µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.496297712Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.49717501Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=876.458µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.498777842Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.499762152Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=984.021µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.50124322Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.502217792Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=974.472µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.504138283Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.504500865Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=362.452µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.505904679Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.506122706Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=217.977µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.507828042Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.508674638Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=846.846µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.510074573Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.51090763Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=832.837µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.513342528Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.514306488Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=963.66µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.517307184Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.51840692Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.097336ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.520343942Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.52120531Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=858.708µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.522766539Z level=info msg="Executing migration" id="create anon_device table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.523526614Z level=info msg="Migration successfully executed" id="create anon_device table" duration=759.725µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.525203727Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.526171649Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=967.541µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.530747435Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.531810209Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.061284ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.533816513Z level=info msg="Executing migration" id="create signing_key table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.5346323Z level=info msg="Migration successfully executed" id="create signing_key table" duration=815.686µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.536450147Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.537334976Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=884.389µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.538998569Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.539903958Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=905.789µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.541310904Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.541534451Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=224.137µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.543142542Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.550342562Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.19788ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.552050468Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.552608586Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=558.658µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.554210806Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.555059854Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=847.498µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.55713719Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.558269827Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.132567ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.559748574Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.560824819Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.075735ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.562411949Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.563334589Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=925.16µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.565232839Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.566310144Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.077545ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.56805376Z level=info msg="Executing migration" id="create sso_setting table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.569055322Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.000882ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.571949525Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.572680028Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=731.222µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.574787276Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.575054624Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=268.058µs
Oct  9 11:01:39 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.578078451Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.578127913Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=51.572µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.579351312Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.585756677Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.405105ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.58742232Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.593906868Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.484428ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.59553744Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.595818309Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=280.909µs
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=migrator t=2025-10-09T11:01:39.597441181Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.151862651s
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore t=2025-10-09T11:01:39.598585387Z level=info msg="Created default organization"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=secrets t=2025-10-09T11:01:39.600309743Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:39 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=plugin.store t=2025-10-09T11:01:39.623120553Z level=info msg="Loading plugins..."
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=local.finder t=2025-10-09T11:01:39.697409562Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=plugin.store t=2025-10-09T11:01:39.697445314Z level=info msg="Plugins loaded" count=55 duration=74.32423ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=query_data t=2025-10-09T11:01:39.700160971Z level=info msg="Query Service initialization"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=live.push_http t=2025-10-09T11:01:39.703183127Z level=info msg="Live Push Gateway initialization"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.migration t=2025-10-09T11:01:39.706362519Z level=info msg=Starting
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.migration t=2025-10-09T11:01:39.706736371Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.migration orgID=1 t=2025-10-09T11:01:39.707062701Z level=info msg="Migrating alerts for organisation"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.migration orgID=1 t=2025-10-09T11:01:39.707631009Z level=info msg="Alerts found to migrate" alerts=0
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.migration t=2025-10-09T11:01:39.709213821Z level=info msg="Completed alerting migration"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.state.manager t=2025-10-09T11:01:39.72886073Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=infra.usagestats.collector t=2025-10-09T11:01:39.730802402Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=provisioning.datasources t=2025-10-09T11:01:39.731919468Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=provisioning.alerting t=2025-10-09T11:01:39.741433672Z level=info msg="starting to provision alerting"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=provisioning.alerting t=2025-10-09T11:01:39.741450682Z level=info msg="finished to provision alerting"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=grafanaStorageLogger t=2025-10-09T11:01:39.741751932Z level=info msg="Storage starting"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.state.manager t=2025-10-09T11:01:39.742552388Z level=info msg="Warming state cache for startup"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.multiorg.alertmanager t=2025-10-09T11:01:39.743338993Z level=info msg="Starting MultiOrg Alertmanager"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=http.server t=2025-10-09T11:01:39.744898643Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=http.server t=2025-10-09T11:01:39.745397149Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore.transactions t=2025-10-09T11:01:39.768158008Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.state.manager t=2025-10-09T11:01:39.773233081Z level=info msg="State cache has been initialized" states=0 duration=30.679403ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ngalert.scheduler t=2025-10-09T11:01:39.773272572Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ticker t=2025-10-09T11:01:39.773321393Z level=info msg=starting first_tick=2025-10-09T11:01:40Z
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=provisioning.dashboard t=2025-10-09T11:01:39.774677936Z level=info msg="starting to provision dashboards"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=plugins.update.checker t=2025-10-09T11:01:39.816182145Z level=info msg="Update check succeeded" duration=74.33193ms
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=grafana.update.checker t=2025-10-09T11:01:39.817669394Z level=info msg="Update check succeeded" duration=75.052483ms
Oct  9 11:01:39 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore.transactions t=2025-10-09T11:01:39.859619447Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:01:39.896Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003926644s
Oct  9 11:01:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=grafana-apiserver t=2025-10-09T11:01:39.912407137Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  9 11:01:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=grafana-apiserver t=2025-10-09T11:01:39.912971806Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  9 11:01:39 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  9 11:01:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=provisioning.dashboard t=2025-10-09T11:01:40.005746977Z level=info msg="finished to provision dashboards"
Oct  9 11:01:40 compute-0 python3[29308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.080119458 +0000 UTC m=+0.038959189 container create 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:01:40 compute-0 systemd[1]: Started libpod-conmon-5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06.scope.
Oct  9 11:01:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c92c7f0d94d985d92e79ae845fedbf5218a9f3b7cc6ad83c89a492f985cdb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c92c7f0d94d985d92e79ae845fedbf5218a9f3b7cc6ad83c89a492f985cdb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.15917385 +0000 UTC m=+0.118013601 container init 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.065583132 +0000 UTC m=+0.024422893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.165535704 +0000 UTC m=+0.124375435 container start 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.168206349 +0000 UTC m=+0.127046080 container attach 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 22 completed events
Oct  9 11:01:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:01:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:40 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 11:01:40 compute-0 ceph-mon[4705]: Deploying daemon haproxy.rgw.default.compute-2.zdhryc on compute-2
Oct  9 11:01:40 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 elated_wright[29324]: could not fetch user info: no user info saved
Oct  9 11:01:40 compute-0 systemd[1]: libpod-5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06.scope: Deactivated successfully.
Oct  9 11:01:40 compute-0 conmon[29324]: conmon 5dcd3de9f538b807674f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06.scope/container/memory.events
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.671893059 +0000 UTC m=+0.630732810 container died 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0c92c7f0d94d985d92e79ae845fedbf5218a9f3b7cc6ad83c89a492f985cdb-merged.mount: Deactivated successfully.
Oct  9 11:01:40 compute-0 podman[29309]: 2025-10-09 11:01:40.707986865 +0000 UTC m=+0.666826596 container remove 5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06 (image=quay.io/ceph/ceph:v19, name=elated_wright, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:01:40 compute-0 systemd[1]: libpod-conmon-5dcd3de9f538b807674f4e734d142a3b0a77436e1edf89ddfbce37094f130a06.scope: Deactivated successfully.
Oct  9 11:01:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:40 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:40 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:40 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:40 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:40 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:40.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:01:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:01:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 11:01:40 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  9 11:01:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct  9 11:01:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.hpolom on compute-0
Oct  9 11:01:40 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.hpolom on compute-0
Oct  9 11:01:40 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  9 11:01:41 compute-0 python3[29450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.061152865 +0000 UTC m=+0.044589039 container create 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 11:01:41 compute-0 systemd[1]: Started libpod-conmon-1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058.scope.
Oct  9 11:01:41 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:41 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:41 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a657dccdebe06c3e378743a9eb94145d31cdd2dd3a891dd9955d45a0773163a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a657dccdebe06c3e378743a9eb94145d31cdd2dd3a891dd9955d45a0773163a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.042528128 +0000 UTC m=+0.025964332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.235416625 +0000 UTC m=+0.218852830 container init 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.254221447 +0000 UTC m=+0.237657631 container start 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.335852012 +0000 UTC m=+0.319288226 container attach 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 11:01:41 compute-0 podman[29643]: 2025-10-09 11:01:41.507797747 +0000 UTC m=+0.025382283 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 11:01:41 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:41 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:41 compute-0 pensive_fermat[29515]: {
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "user_id": "openstack",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "display_name": "openstack",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "email": "",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "suspended": 0,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "max_buckets": 1000,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "subusers": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "keys": [
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        {
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:            "user": "openstack",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:            "access_key": "6MII5IL97292SPADKM04",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:            "secret_key": "vKcaqGv0eERmVD7KYJlp4DAb3g6ttYb4qlqTP3Jo",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:            "active": true,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:            "create_date": "2025-10-09T11:01:41.460792Z"
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        }
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    ],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "swift_keys": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "caps": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "op_mask": "read, write, delete",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "default_placement": "",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "default_storage_class": "",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "placement_tags": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "bucket_quota": {
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "enabled": false,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "check_on_raw": false,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_size": -1,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_size_kb": 0,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_objects": -1
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    },
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "user_quota": {
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "enabled": false,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "check_on_raw": false,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_size": -1,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_size_kb": 0,
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:        "max_objects": -1
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    },
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "temp_url_keys": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "type": "rgw",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "mfa_ids": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "account_id": "",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "path": "/",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "create_date": "2025-10-09T11:01:41.460436Z",
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "tags": [],
Oct  9 11:01:41 compute-0 pensive_fermat[29515]:    "group_ids": []
Oct  9 11:01:41 compute-0 pensive_fermat[29515]: }
Oct  9 11:01:41 compute-0 pensive_fermat[29515]: 
Oct  9 11:01:41 compute-0 podman[29643]: 2025-10-09 11:01:41.723212385 +0000 UTC m=+0.240796901 container create 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, distribution-scope=public)
Oct  9 11:01:41 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 11:01:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 11:01:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 11:01:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct  9 11:01:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 11:01:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 11:01:41 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:41 compute-0 podman[29473]: 2025-10-09 11:01:41.965069001 +0000 UTC m=+0.948505185 container died 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 11:01:41 compute-0 systemd[1]: Started libpod-conmon-2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b.scope.
Oct  9 11:01:41 compute-0 systemd[1]: libpod-1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058.scope: Deactivated successfully.
Oct  9 11:01:41 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  9 11:01:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  9 11:01:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:42 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:42 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:42 compute-0 ceph-mon[4705]: Deploying daemon keepalived.rgw.default.compute-0.hpolom on compute-0
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 11:01:42 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 11:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a657dccdebe06c3e378743a9eb94145d31cdd2dd3a891dd9955d45a0773163a3-merged.mount: Deactivated successfully.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  9 11:01:42 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.879933357s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.922103882s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.879899979s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.922103882s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.767147064s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809448242s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.879824638s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.922149658s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.767104149s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809448242s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.879805565s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.922149658s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.883038521s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925506592s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766944885s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809448242s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766887665s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809448242s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766883850s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809448242s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766869545s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809448242s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766667366s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809371948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766648293s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809371948s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882740974s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925476074s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882726669s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925476074s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766783714s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809570312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766758919s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809570312s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766691208s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809585571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766679764s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809585571s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882573128s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925491333s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882558823s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925491333s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882706642s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925704956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766598701s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809600830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882686615s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925704956s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766586304s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809600830s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766741753s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809829712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766727448s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809829712s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766430855s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809646606s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766406059s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809646606s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766366959s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809677124s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766346931s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809677124s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882135391s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925506592s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882200241s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925659180s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.882185936s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925659180s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766123772s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809722900s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766100883s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809722900s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766053200s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809707642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766039848s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809707642s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881943703s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925720215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881927490s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925720215s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766019821s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809829712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.766003609s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809829712s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881782532s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925735474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765890121s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809844971s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881764412s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925735474s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765869141s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809844971s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881723404s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925750732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881706238s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925750732s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765782356s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809875488s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765771866s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809875488s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881519318s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925765991s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881503105s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925765991s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.4( v 60'1 (0'0,60'1] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881443977s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 active pruub 183.925796509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.4( v 60'1 (0'0,60'1] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881415367s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.925796509s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881287575s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925796509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881269455s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925796509s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765466690s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.810089111s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765448570s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810089111s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765271187s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809967041s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765254974s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809967041s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881065369s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925811768s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.881049156s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925811768s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765160561s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.809967041s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=55/56 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765142441s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809967041s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765126228s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.810012817s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880976677s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925903320s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880960464s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925903320s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765078545s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810012817s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880865097s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925903320s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765069008s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.810134888s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880846977s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925903320s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.765053749s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810134888s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880798340s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925918579s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880783081s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925918579s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.768145561s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.813369751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880741119s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925979614s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.768127441s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.813369751s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880725861s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925979614s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.764801025s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.810104370s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.764784813s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810104370s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880537987s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925949097s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880519867s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925949097s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880458832s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active pruub 183.925949097s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=8.880442619s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.925949097s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.764509201s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.810180664s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.764493942s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810180664s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.767528534s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active pruub 189.813354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=55/56 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=14.767511368s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.813354492s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:42 compute-0 podman[29473]: 2025-10-09 11:01:42.387629333 +0000 UTC m=+1.371065517 container remove 1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058 (image=quay.io/ceph/ceph:v19, name=pensive_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.10( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.12( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.6( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.8( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.a( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.c( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.b( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.1c( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[12.19( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 61 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:42 compute-0 systemd[1]: libpod-conmon-1e808ecb07fafd3336622a8614102207722e2c8f05e142bc1056a141cd22b058.scope: Deactivated successfully.
Oct  9 11:01:42 compute-0 podman[29643]: 2025-10-09 11:01:42.441179888 +0000 UTC m=+0.958764424 container init 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, release=1793, version=2.2.4, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Oct  9 11:01:42 compute-0 podman[29643]: 2025-10-09 11:01:42.446266201 +0000 UTC m=+0.963850737 container start 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Oct  9 11:01:42 compute-0 fervent_brattain[29660]: 0 0
Oct  9 11:01:42 compute-0 podman[29643]: 2025-10-09 11:01:42.44937403 +0000 UTC m=+0.966958546 container attach 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.buildah.version=1.28.2, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Oct  9 11:01:42 compute-0 systemd[1]: libpod-2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b.scope: Deactivated successfully.
Oct  9 11:01:42 compute-0 conmon[29660]: conmon 2ca0e506da03a581eafe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b.scope/container/memory.events
Oct  9 11:01:42 compute-0 podman[29643]: 2025-10-09 11:01:42.451503478 +0000 UTC m=+0.969087994 container died 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, io.openshift.expose-services=, description=keepalived for Ceph, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  9 11:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-792a26f441fc97f14e8ef70efc5826d3226a58c139b1f4b7aaf934a2791174ad-merged.mount: Deactivated successfully.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:42 compute-0 podman[29643]: 2025-10-09 11:01:42.491046865 +0000 UTC m=+1.008631381 container remove 2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b (image=quay.io/ceph/keepalived:2.2.4, name=fervent_brattain, release=1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.500887) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702501018, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7004, "num_deletes": 254, "total_data_size": 13443843, "memory_usage": 14099200, "flush_reason": "Manual Compaction"}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct  9 11:01:42 compute-0 systemd[1]: libpod-conmon-2ca0e506da03a581eafe7b39953f7848c95925d87b4aefffcd8fe1166e7f393b.scope: Deactivated successfully.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702546888, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12031286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 143, "largest_seqno": 7142, "table_properties": {"data_size": 12005782, "index_size": 16174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79932, "raw_average_key_size": 24, "raw_value_size": 11942676, "raw_average_value_size": 3631, "num_data_blocks": 710, "num_entries": 3289, "num_filter_entries": 3289, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007438, "oldest_key_time": 1760007438, "file_creation_time": 1760007702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "PFLMSQ4A6H5TNSVWO03K", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 46074 microseconds, and 20981 cpu microseconds.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.546971) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12031286 bytes OK
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.546995) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.548316) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.548333) EVENT_LOG_v1 {"time_micros": 1760007702548328, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.548352) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13411997, prev total WAL file size 13411997, number of live WAL files 2.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.551299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702551394, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12092849, "oldest_snapshot_seqno": -1}
Oct  9 11:01:42 compute-0 systemd[1]: Reloading.
Oct  9 11:01:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3109 keys, 12074762 bytes, temperature: kUnknown
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702614162, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12074762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12049624, "index_size": 16261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 78847, "raw_average_key_size": 25, "raw_value_size": 11988106, "raw_average_value_size": 3855, "num_data_blocks": 714, "num_entries": 3109, "num_filter_entries": 3109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007436, "oldest_key_time": 0, "file_creation_time": 1760007702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "PFLMSQ4A6H5TNSVWO03K", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.614374) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12074762 bytes
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.616808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 192.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.5, 0.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3399, records dropped: 290 output_compression: NoCompression
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.616839) EVENT_LOG_v1 {"time_micros": 1760007702616822, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62828, "compaction_time_cpu_micros": 22705, "output_level": 6, "num_output_files": 1, "total_output_size": 12074762, "num_input_records": 3399, "num_output_records": 3109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702618637, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702618688, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702618717, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  9 11:01:42 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:01:42.551180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:01:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:42 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:42 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:42 compute-0 systemd[1]: Reloading.
Oct  9 11:01:42 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:42 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:42 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:42 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  9 11:01:42 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  9 11:01:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:42 compute-0 python3[29750]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 11:01:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  9 11:01:42 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  9 11:01:43 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:43 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:43 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:43 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.hpolom for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: [dashboard INFO request] [192.168.122.100:32816] [GET] [200] [0.154s] [6.3K] [316a927f-b315-4b92-8129-b84cf46910d8] /
Oct  9 11:01:43 compute-0 podman[29859]: 2025-10-09 11:01:43.333442302 +0000 UTC m=+0.041336035 container create 76081b5b4d9f0882365474df127cd390b68dd42866339cf6f525a71e6e3d9ade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  9 11:01:43 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:43 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:43 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:43 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 11:01:43 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  9 11:01:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e00d8ac356f767cb4d95747ced57a739b48ae0b3398b9724cc498de82d8d1e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  9 11:01:43 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  9 11:01:43 compute-0 podman[29859]: 2025-10-09 11:01:43.391588844 +0000 UTC m=+0.099482597 container init 76081b5b4d9f0882365474df127cd390b68dd42866339cf6f525a71e6e3d9ade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.10( v 60'61 lc 60'59 (0'0,60'61] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'61 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.14( v 60'57 lc 60'56 (0'0,60'57] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.12( v 60'59 lc 0'0 (0'0,60'59] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'59 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.15( v 60'57 lc 60'56 (0'0,60'57] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.13( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.6( v 56'58 lc 52'44 (0'0,56'58] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.b( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.c( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.e( v 60'59 lc 0'0 (0'0,60'59] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=60'59 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.a( v 56'58 lc 0'0 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.8( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.8( v 39'48 (0'0,39'48] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.2( v 39'48 (0'0,39'48] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.5( v 39'48 (0'0,39'48] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.18( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.19( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.19( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 podman[29859]: 2025-10-09 11:01:43.400067725 +0000 UTC m=+0.107961458 container start 76081b5b4d9f0882365474df127cd390b68dd42866339cf6f525a71e6e3d9ade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, distribution-scope=public, release=1793, name=keepalived)
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[10.1b( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 62 pg[12.1c( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:43 compute-0 bash[29859]: 76081b5b4d9f0882365474df127cd390b68dd42866339cf6f525a71e6e3d9ade
Oct  9 11:01:43 compute-0 podman[29859]: 2025-10-09 11:01:43.314248317 +0000 UTC m=+0.022142080 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 11:01:43 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.hpolom for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Starting VRRP child process, pid=4
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: Startup complete
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:43 2025: (VI_0) Entering BACKUP STATE
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:43 2025: VRRP_Script(check_backend) succeeded
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:43 compute-0 python3[29868]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 11:01:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 11:01:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.txrqnp on compute-2
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.txrqnp on compute-2
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: [dashboard INFO request] [192.168.122.100:32832] [GET] [200] [0.001s] [6.3K] [96ca5338-fd0e-4cf4-94ec-f3faeba6855a] /
Oct  9 11:01:43 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:43 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:43 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:43 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct  9 11:01:43 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 11:01:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj[28016]: Thu Oct  9 11:01:44 2025: (VI_0) Entering MASTER STATE
Oct  9 11:01:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:44 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 11:01:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  9 11:01:44 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 11:01:44 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  9 11:01:44 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  9 11:01:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:44 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:44 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:44 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:44 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:44 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:44 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:44.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.a scrub starts
Oct  9 11:01:44 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.a scrub ok
Oct  9 11:01:45 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:45 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:01:45 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev eb3a3053-b88d-42a2-9b3c-a1ce278ba5f8 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event eb3a3053-b88d-42a2-9b3c-a1ce278ba5f8 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 23 completed events
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [progress INFO root] update: starting ev 1cd6a729-35a1-441a-aefc-e50f4bbe7112 (Updating prometheus deployment (+1 -> 1))
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 9b34178e-56e0-416e-87c8-bfe0879caabe (Global Recovery Event) in 10 seconds
Oct  9 11:01:45 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 11:01:45 compute-0 ceph-mon[4705]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 11:01:45 compute-0 ceph-mon[4705]: Deploying daemon keepalived.rgw.default.compute-2.txrqnp on compute-2
Oct  9 11:01:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 11:01:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:45 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:45 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct  9 11:01:45 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:45 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct  9 11:01:45 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 11:01:46 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:46 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 11:01:46 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:46 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 11:01:46 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:46 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 11:01:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  9 11:01:46 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 11:01:46 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  9 11:01:46 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.614142418s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.809616089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.614089966s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809616089s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613701820s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.809585571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613670349s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809585571s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613463402s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.809722900s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613441467s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.809722900s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613382339s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809722900s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613379478s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809722900s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613306999s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.809921265s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613286018s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.809921265s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613134384s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.810043335s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.613111496s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.810043335s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.616297722s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.813354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.616276741s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.813354492s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.615953445s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 189.813293457s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:46 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 64 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64 pruub=10.615934372s) [2] r=-1 lpr=64 pi=[55,64)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.813293457s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:46 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:46 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:46 compute-0 ceph-mon[4705]: Deploying daemon prometheus.compute-0 on compute-0
Oct  9 11:01:46 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 11:01:46 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:46 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:46 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:46 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:46 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:46 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:46 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:46.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  9 11:01:46 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  9 11:01:47 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-0-hpolom[29878]: Thu Oct  9 11:01:47 2025: (VI_0) Entering MASTER STATE
Oct  9 11:01:47 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:47 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  9 11:01:47 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  9 11:01:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  9 11:01:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:47 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 65 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:47 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:47 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:47 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 0 keys/s, 3 objects/s recovering
Oct  9 11:01:47 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct  9 11:01:47 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 11:01:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  9 11:01:47 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  9 11:01:48 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 11:01:48 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  9 11:01:48 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:48 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:48 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:48 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  9 11:01:48 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:48 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  9 11:01:48 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  9 11:01:48 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  9 11:01:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 11:01:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  9 11:01:49 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 66 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:49 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:49 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:49 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.23965961 +0000 UTC m=+3.165426059 volume create b5b217d49fc6cd2837066a048a553831b5ea8c48668b611427e7161caa8268e0
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.246773718 +0000 UTC m=+3.172540167 container create a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 systemd[1]: Started libpod-conmon-a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef.scope.
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.224808775 +0000 UTC m=+3.150575254 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 11:01:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a11593a3e82bedbe3fac93e78d2fbd4cb381253ec964ca42055b855967e434/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.331230673 +0000 UTC m=+3.256997142 container init a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.338047871 +0000 UTC m=+3.263814320 container start a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 inspiring_euler[30235]: 65534 65534
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.341258954 +0000 UTC m=+3.267025403 container attach a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 systemd[1]: libpod-a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef.scope: Deactivated successfully.
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.341696698 +0000 UTC m=+3.267463147 container died a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-09a11593a3e82bedbe3fac93e78d2fbd4cb381253ec964ca42055b855967e434-merged.mount: Deactivated successfully.
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.381057048 +0000 UTC m=+3.306823497 container remove a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef (image=quay.io/prometheus/prometheus:v2.51.0, name=inspiring_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 podman[29977]: 2025-10-09 11:01:49.387580687 +0000 UTC m=+3.313347156 volume remove b5b217d49fc6cd2837066a048a553831b5ea8c48668b611427e7161caa8268e0
Oct  9 11:01:49 compute-0 systemd[1]: libpod-conmon-a2b4c6b008e542ee2a679ec58b2a248f11de8e57ae7c825bb0b575e86098a9ef.scope: Deactivated successfully.
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.453431226 +0000 UTC m=+0.041461409 volume create b7160c7c6135922b1acb3ac6cbfb68d212e76351586b811d3abf53568ac8f989
Oct  9 11:01:49 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:49 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.461739292 +0000 UTC m=+0.049769465 container create 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 systemd[1]: Started libpod-conmon-46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141.scope.
Oct  9 11:01:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681ecf6ce9b71418bfa81b549d0a2378b4897e6144acfd2536bf529adea6eb9e/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.524720189 +0000 UTC m=+0.112750362 container init 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.435311256 +0000 UTC m=+0.023341449 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.529517353 +0000 UTC m=+0.117547526 container start 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 friendly_mestorf[30268]: 65534 65534
Oct  9 11:01:49 compute-0 systemd[1]: libpod-46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141.scope: Deactivated successfully.
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.533009685 +0000 UTC m=+0.121039878 container attach 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.533396297 +0000 UTC m=+0.121426470 container died 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-681ecf6ce9b71418bfa81b549d0a2378b4897e6144acfd2536bf529adea6eb9e-merged.mount: Deactivated successfully.
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.576992883 +0000 UTC m=+0.165023056 container remove 46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141 (image=quay.io/prometheus/prometheus:v2.51.0, name=friendly_mestorf, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:49 compute-0 podman[30251]: 2025-10-09 11:01:49.581959472 +0000 UTC m=+0.169989645 volume remove b7160c7c6135922b1acb3ac6cbfb68d212e76351586b811d3abf53568ac8f989
Oct  9 11:01:49 compute-0 systemd[1]: libpod-conmon-46e638027c99615e7e0fc6966d9ea2a68b529c34c03b4b29b7bb517463d93141.scope: Deactivated successfully.
Oct  9 11:01:49 compute-0 systemd[1]: Reloading.
Oct  9 11:01:49 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:49 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:49 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 11:01:49 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 11:01:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:49 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 keys/s, 2 objects/s recovering
Oct  9 11:01:49 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct  9 11:01:49 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 11:01:49 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct  9 11:01:49 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct  9 11:01:49 compute-0 systemd[1]: Reloading.
Oct  9 11:01:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 11:01:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.998591423s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.764007568s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.998489380s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.763946533s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.996104240s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.761566162s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.998529434s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.764007568s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.996049881s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.761566162s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.998424530s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.763946533s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995980263s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.761672974s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995930672s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.761672974s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995928764s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.761734009s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995891571s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.761734009s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043578148s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.809494019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043560028s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.809494019s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995518684s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.761550903s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995482445s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.761550903s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995382309s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.761550903s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.995342255s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.761550903s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.997603416s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.763992310s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=65/66 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.997325897s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.763992310s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043219566s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.810241699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043196678s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.810241699s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.5( v 59'1023 (0'0,59'1023] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043474197s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=56'1021 lcod 58'1022 mlcod 58'1022 active pruub 197.810928345s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.5( v 59'1023 (0'0,59'1023] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043445587s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=56'1021 lcod 58'1022 mlcod 0'0 unknown NOTIFY pruub 197.810928345s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043350220s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 197.811203003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:50 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 67 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=15.043327332s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.811203003s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:50 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:01:50 compute-0 ceph-mgr[4997]: [progress INFO root] Writing back 24 completed events
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 11:01:50 compute-0 podman[30410]: 2025-10-09 11:01:50.420202056 +0000 UTC m=+0.045388565 container create a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee2a5c71265d992ff6ad8771844657c98263e4ba3a1a6e581f68f12306964c/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee2a5c71265d992ff6ad8771844657c98263e4ba3a1a6e581f68f12306964c/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 11:01:50 compute-0 podman[30410]: 2025-10-09 11:01:50.489516375 +0000 UTC m=+0.114702894 container init a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:50 compute-0 podman[30410]: 2025-10-09 11:01:50.399229014 +0000 UTC m=+0.024415533 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 11:01:50 compute-0 podman[30410]: 2025-10-09 11:01:50.497634085 +0000 UTC m=+0.122820584 container start a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:01:50 compute-0 bash[30410]: a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04
Oct  9 11:01:50 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.531Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.531Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.531Z caller=main.go:623 level=info host_details="(Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 x86_64 compute-0 (none))"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.531Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.531Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.533Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.534Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.535Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.535Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.538Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.538Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.35µs
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.539Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.539Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.539Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=48.952µs wal_replay_duration=468.035µs wbl_replay_duration=160ns total_replay_duration=1.016893ms
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.541Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.541Z caller=main.go:1153 level=info msg="TSDB started"
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.541Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.575Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=33.593686ms db_storage=1.43µs remote_storage=1.901µs web_handler=630ns query_engine=720ns scrape=3.280235ms scrape_sd=173.175µs notify=18.081µs notify_sd=14.13µs rules=29.276357ms tracing=16.22µs
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.575Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0[30425]: ts=2025-10-09T11:01:50.575Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-mgr[4997]: [progress INFO root] complete: finished ev 1cd6a729-35a1-441a-aefc-e50f4bbe7112 (Updating prometheus deployment (+1 -> 1))
Oct  9 11:01:50 compute-0 ceph-mgr[4997]: [progress INFO root] Completed event 1cd6a729-35a1-441a-aefc-e50f4bbe7112 (Updating prometheus deployment (+1 -> 1)) in 5 seconds
Oct  9 11:01:50 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct  9 11:01:50 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:50 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ee0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 11:01:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 11:01:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:50 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct  9 11:01:50 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct  9 11:01:50 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:50 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009ee0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:50 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:50 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:01:50 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:50.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:01:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  9 11:01:51 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:51 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:51 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:51.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:51 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  9 11:01:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.5( v 59'1023 (0'0,59'1023] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=56'1021 lcod 58'1022 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.5( v 59'1023 (0'0,59'1023] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=56'1021 lcod 58'1022 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:51 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 68 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:01:51 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:51 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:51 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  1: '-n'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  2: 'mgr.compute-0.izrudc'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  3: '-f'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  4: '--setuser'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  5: 'ceph'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  6: '--setgroup'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  7: 'ceph'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr respawn  exe_path /proc/self/exe
Oct  9 11:01:51 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.izrudc(active, since 82s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:01:51 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  9 11:01:51 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct  9 11:01:51 compute-0 systemd[1]: session-20.scope: Consumed 44.236s CPU time.
Oct  9 11:01:51 compute-0 systemd-logind[846]: Session 20 logged out. Waiting for processes to exit.
Oct  9 11:01:51 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  9 11:01:51 compute-0 systemd-logind[846]: Removed session 20.
Oct  9 11:01:51 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setuser ceph since I am not root
Oct  9 11:01:51 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ignoring --setgroup ceph since I am not root
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: pidfile_write: ignore empty --pid-file
Oct  9 11:01:51 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'alerts'
Oct  9 11:01:52 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:52 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 11:01:52 compute-0 ceph-mon[4705]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 11:01:52 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:52.053+0000 7f65173fa140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'balancer'
Oct  9 11:01:52 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:52.138+0000 7f65173fa140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'cephadm'
Oct  9 11:01:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  9 11:01:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  9 11:01:52 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 69 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 69 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 69 pg[9.5( v 59'1023 (0'0,59'1023] local-lis/les=68/69 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[55,68)/1 crt=59'1023 lcod 58'1022 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 69 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[55,68)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:01:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  9 11:01:52 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  9 11:01:52 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=4 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.721966743s) [2] async=[2] r=-1 lpr=70 pi=[55,70)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 200.961975098s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=4 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.721919060s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.961975098s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.5( v 69'1025 (0'0,69'1025] local-lis/les=68/69 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.724173546s) [2] async=[2] r=-1 lpr=70 pi=[55,70)/1 crt=59'1023 lcod 69'1024 mlcod 69'1024 active pruub 200.965072632s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.5( v 69'1025 (0'0,69'1025] local-lis/les=68/69 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.724098206s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=59'1023 lcod 69'1024 mlcod 0'0 unknown NOTIFY pruub 200.965072632s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=5 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.723876953s) [2] async=[2] r=-1 lpr=70 pi=[55,70)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 200.965087891s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:52 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 70 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=5 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70 pruub=15.723839760s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.965087891s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:52 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:52 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'crash'
Oct  9 11:01:52 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:52 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:52 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:52 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:01:52 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:52.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:01:52 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  9 11:01:52 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  9 11:01:52 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:52.937+0000 7f65173fa140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 11:01:52 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'dashboard'
Oct  9 11:01:53 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:53 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:53 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:53.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'devicehealth'
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:53.570+0000 7f65173fa140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 11:01:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  9 11:01:53 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  9 11:01:53 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  9 11:01:53 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 71 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=71 pruub=14.700611115s) [2] async=[2] r=-1 lpr=71 pi=[55,71)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 200.964935303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:01:53 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 71 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=68/69 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=71 pruub=14.700544357s) [2] r=-1 lpr=71 pi=[55,71)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.964935303s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:53 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009f00 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]:  from numpy import show_config as show_numpy_config
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:53.779+0000 7f65173fa140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'influx'
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:53.847+0000 7f65173fa140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'insights'
Oct  9 11:01:53 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  9 11:01:53 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'iostat'
Oct  9 11:01:53 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:53.988+0000 7f65173fa140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 11:01:53 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'k8sevents'
Oct  9 11:01:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'localpool'
Oct  9 11:01:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 11:01:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  9 11:01:54 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  9 11:01:54 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  9 11:01:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'mirroring'
Oct  9 11:01:54 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'nfs'
Oct  9 11:01:54 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:54 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:54 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:54 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:54 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:54 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:01:54 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:54.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:01:54 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  9 11:01:54 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.027+0000 7f65173fa140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'orchestrator'
Oct  9 11:01:55 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 11:01:55 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:55 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:55 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.251+0000 7f65173fa140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.325+0000 7f65173fa140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'osd_support'
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.389+0000 7f65173fa140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.472+0000 7f65173fa140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'progress'
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.546+0000 7f65173fa140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'prometheus'
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:55 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.888+0000 7f65173fa140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rbd_support'
Oct  9 11:01:55 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  9 11:01:55 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  9 11:01:55 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:55.990+0000 7f65173fa140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 11:01:55 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'restful'
Oct  9 11:01:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rgw'
Oct  9 11:01:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd[27655]: [WARNING] 281/110156 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 11:01:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:56.422+0000 7f65173fa140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:01:56 compute-0 ceph-mgr[4997]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 11:01:56 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'rook'
Oct  9 11:01:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:56 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009f20 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:56 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:56 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:56 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:56 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:56 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:56 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  9 11:01:56 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.005+0000 7f65173fa140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'selftest'
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.081+0000 7f65173fa140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'snap_schedule'
Oct  9 11:01:57 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:57 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:57 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:57.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.165+0000 7f65173fa140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'stats'
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'status'
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.321+0000 7f65173fa140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telegraf'
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.391+0000 7f65173fa140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'telemetry'
Oct  9 11:01:57 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.549+0000 7f65173fa140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:57 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:57 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:57.772+0000 7f65173fa140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 11:01:57 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'volumes'
Oct  9 11:01:57 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct  9 11:01:57 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.049+0000 7f65173fa140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr[py] Loading python module 'zabbix'
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.119+0000 7f65173fa140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Active manager daemon compute-0.izrudc restarted
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.izrudc
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: ms_deliver_dispatch: unhandled message 0x5624728ad860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr handle_mgr_map Activating!
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr handle_mgr_map I am now activating
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.izrudc(active, starting, since 0.16797s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).mds e9 all = 1
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: balancer
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [balancer INFO root] Starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [INF] : Manager daemon compute-0.izrudc is now available
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [balancer INFO root] Optimize plan auto_2025-10-09_11:01:58
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: cephadm
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: crash
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: dashboard
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: devicehealth
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: iostat
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: nfs
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: orchestrator
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: pg_autoscaler
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: progress
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv restarted
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.agiurv started
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [progress INFO root] Loading...
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f6499f3b9a0>, <progress.module.GhostEvent object at 0x7f6499f3bc10>, <progress.module.GhostEvent object at 0x7f6499f3bc40>, <progress.module.GhostEvent object at 0x7f6499f3bc70>, <progress.module.GhostEvent object at 0x7f6499f3bca0>, <progress.module.GhostEvent object at 0x7f6499f3bcd0>, <progress.module.GhostEvent object at 0x7f6499f3bd00>, <progress.module.GhostEvent object at 0x7f6499f3bd30>, <progress.module.GhostEvent object at 0x7f6499f3bd60>, <progress.module.GhostEvent object at 0x7f6499f3bd90>, <progress.module.GhostEvent object at 0x7f6499f3bdc0>, <progress.module.GhostEvent object at 0x7f6499f3bdf0>, <progress.module.GhostEvent object at 0x7f6499f3be20>, <progress.module.GhostEvent object at 0x7f6499f3be50>, <progress.module.GhostEvent object at 0x7f6499f3be80>, <progress.module.GhostEvent object at 0x7f6499f3beb0>, <progress.module.GhostEvent object at 0x7f6499f3bee0>, <progress.module.GhostEvent object at 0x7f6499f3bf10>, <progress.module.GhostEvent object at 0x7f6499f3bf40>, <progress.module.GhostEvent object at 0x7f6499f3bf70>, <progress.module.GhostEvent object at 0x7f6499f3bfa0>, <progress.module.GhostEvent object at 0x7f6499f3bfd0>, <progress.module.GhostEvent object at 0x7f6499f51040>, <progress.module.GhostEvent object at 0x7f6499f51070>] historic events
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: prometheus
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO root] Cache enabled
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO root] starting metric collection thread
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] recovery thread starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] starting setup
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:01:58] ENGINE Bus STARTING
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO root] Starting engine...
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: CherryPy Checker:
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: The Application mounted at '' has an empty config.
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Bus STARTING
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: rbd_support
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: restful
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: status
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: telemetry
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [restful WARNING root] server not running: no certificate configured
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] PerfHandler: starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: mgr load Constructed class from module: volumes
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.487+0000 7f648434a640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.488+0000 7f6481344640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.488+0000 7f6481344640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.488+0000 7f6481344640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.488+0000 7f6481344640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: 2025-10-09T11:01:58.488+0000 7f6481344640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: client.0 error registering admin socket command: (17) File exists
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TaskHandler: starting
Oct  9 11:01:58 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"} v 0)
Oct  9 11:01:58 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [rbd_support INFO root] setup complete
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:01:58] ENGINE Serving on http://:::9283
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:01:58] ENGINE Bus STARTED
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Serving on http://:::9283
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Bus STARTED
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [prometheus INFO root] Engine started.
Oct  9 11:01:58 compute-0 systemd-logind[846]: New session 22 of user ceph-admin.
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 11:01:58 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:58 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:58 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:58 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a30009f40 fd 15 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:58 compute-0 ceph-mgr[4997]: [dashboard INFO dashboard.module] Engine started.
Oct  9 11:01:58 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:58 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:58 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:58 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Oct  9 11:01:59 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Oct  9 11:01:59 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm restarted
Oct  9 11:01:59 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.rtiqvm started
Oct  9 11:01:59 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:01:59 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:01:59 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:59.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:01:59 compute-0 ceph-mon[4705]: Active manager daemon compute-0.izrudc restarted
Oct  9 11:01:59 compute-0 ceph-mon[4705]: Activating manager daemon compute-0.izrudc
Oct  9 11:01:59 compute-0 ceph-mon[4705]: Manager daemon compute-0.izrudc is now available
Oct  9 11:01:59 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:01:59 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct  9 11:01:59 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct  9 11:01:59 compute-0 podman[30777]: 2025-10-09 11:01:59.433330569 +0000 UTC m=+0.052176312 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 11:01:59 compute-0 podman[30777]: 2025-10-09 11:01:59.520243722 +0000 UTC m=+0.139089445 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 11:01:59 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.izrudc(active, since 1.42084s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:01:59] ENGINE Bus STARTING
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:01:59] ENGINE Bus STARTING
Oct  9 11:01:59 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:01:59 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:01:59] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:01:59] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:01:59] ENGINE Client ('192.168.122.100', 42036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:01:59] ENGINE Client ('192.168.122.100', 42036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:01:59] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:01:59] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: [cephadm INFO cherrypy.error] [09/Oct/2025:11:01:59] ENGINE Bus STARTED
Oct  9 11:01:59 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : [09/Oct/2025:11:01:59] ENGINE Bus STARTED
Oct  9 11:01:59 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  9 11:01:59 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  9 11:02:00 compute-0 podman[30939]: 2025-10-09 11:02:00.002720993 +0000 UTC m=+0.050330183 container exec 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:00 compute-0 podman[30963]: 2025-10-09 11:02:00.064117139 +0000 UTC m=+0.048695491 container exec_died 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:00 compute-0 podman[30939]: 2025-10-09 11:02:00.073357105 +0000 UTC m=+0.120966265 container exec_died 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:00 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct  9 11:02:00 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 11:02:00 compute-0 podman[31013]: 2025-10-09 11:02:00.315540801 +0000 UTC m=+0.060363134 container exec ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 11:02:00 compute-0 podman[31013]: 2025-10-09 11:02:00.327215245 +0000 UTC m=+0.072037568 container exec_died ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 11:02:00 compute-0 podman[31076]: 2025-10-09 11:02:00.511474445 +0000 UTC m=+0.046211550 container exec 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:02:00 compute-0 ceph-mon[4705]: [09/Oct/2025:11:01:59] ENGINE Bus STARTING
Oct  9 11:02:00 compute-0 ceph-mon[4705]: [09/Oct/2025:11:01:59] ENGINE Serving on https://192.168.122.100:7150
Oct  9 11:02:00 compute-0 ceph-mon[4705]: [09/Oct/2025:11:01:59] ENGINE Client ('192.168.122.100', 42036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 11:02:00 compute-0 ceph-mon[4705]: [09/Oct/2025:11:01:59] ENGINE Serving on http://192.168.122.100:8765
Oct  9 11:02:00 compute-0 ceph-mon[4705]: [09/Oct/2025:11:01:59] ENGINE Bus STARTED
Oct  9 11:02:00 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 11:02:00 compute-0 podman[31076]: 2025-10-09 11:02:00.517722635 +0000 UTC m=+0.052459720 container exec_died 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:02:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  9 11:02:00 compute-0 podman[31138]: 2025-10-09 11:02:00.694052862 +0000 UTC m=+0.046305084 container exec f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  9 11:02:00 compute-0 podman[31138]: 2025-10-09 11:02:00.708259257 +0000 UTC m=+0.060511449 container exec_died f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, version=2.2.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git)
Oct  9 11:02:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:00] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  9 11:02:00 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.access.140070047889680] ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:00] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  9 11:02:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:00 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:00 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 11:02:00 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  9 11:02:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284728050s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 205.809829712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284683228s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.809829712s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284582138s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 205.810028076s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284566879s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.810028076s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284933090s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 205.810607910s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.284921646s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.810607910s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.285125732s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 205.811187744s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:00 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 74 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=74 pruub=12.285110474s) [1] r=-1 lpr=74 pi=[55,74)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.811187744s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:00 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:00 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:00 compute-0 podman[31207]: 2025-10-09 11:02:00.903091326 +0000 UTC m=+0.056480630 container exec e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:00 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:00 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:00 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:00.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:00 compute-0 podman[31207]: 2025-10-09 11:02:00.937405775 +0000 UTC m=+0.090795059 container exec_died e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:00 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.izrudc(active, since 2s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:02:00 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  9 11:02:00 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  9 11:02:00 compute-0 ceph-mgr[4997]: [devicehealth INFO root] Check health
Oct  9 11:02:01 compute-0 podman[31291]: 2025-10-09 11:02:01.131304265 +0000 UTC m=+0.045828439 container exec 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:01 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:01 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  9 11:02:01 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:01.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  9 11:02:01 compute-0 podman[31291]: 2025-10-09 11:02:01.286463813 +0000 UTC m=+0.200987997 container exec_died 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 11:02:01 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 podman[31401]: 2025-10-09 11:02:01.640422918 +0000 UTC m=+0.071050856 container exec a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:01 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:01 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a20000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:01 compute-0 podman[31401]: 2025-10-09 11:02:01.674461088 +0000 UTC m=+0.105089016 container exec_died a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  9 11:02:01 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  9 11:02:01 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:01 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 75 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:01 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  9 11:02:01 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:02:02 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:02 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:02 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:02 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:02 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  9 11:02:02 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:02 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  9 11:02:02 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:02 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:02 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.izrudc(active, since 4s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:02 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 76 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:02 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 76 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:02 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 76 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:02 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 76 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=4 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[55,75)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:02 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 11:02:02 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 11:02:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:03 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 11:02:03 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:02:03 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:03 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:03 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:03.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:03 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:03 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:03 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct  9 11:02:03 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct  9 11:02:03 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 11:02:04 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:04 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct  9 11:02:04 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:04 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:04 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a20001e60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:04 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:04 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:04 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Oct  9 11:02:04 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Oct  9 11:02:04 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:04 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:04 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:04 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  9 11:02:04 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=4 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.992557526s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 211.626113892s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.16( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=4 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.992511749s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.626113892s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.992296219s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 211.626083374s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.992270470s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.626083374s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=6 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.991678238s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 211.626068115s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.6( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=6 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.991625786s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.626068115s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.987586021s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 211.622177124s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:04 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 77 pg[9.1e( v 42'1020 (0'0,42'1020] local-lis/les=75/76 n=5 ec=55/36 lis/c=75/55 les/c/f=76/56/0 sis=77 pruub=13.987560272s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.622177124s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:02:05 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:05 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:05 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:05.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:05 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:05 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:05 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct  9 11:02:05 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:02:05 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  9 11:02:05 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  9 11:02:05 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  9 11:02:06 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 4 active+remapped, 349 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 9 op/s; 40 B/s, 3 objects/s recovering
Oct  9 11:02:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct  9 11:02:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 11:02:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:06 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:06 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:06 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:06 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a20001e60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 11:02:06 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  9 11:02:06 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:06 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  9 11:02:06 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:06 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:06 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:06 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 78 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78 pruub=14.221431732s) [2] r=-1 lpr=78 pi=[55,78)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 213.810256958s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:06 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 78 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78 pruub=14.221203804s) [2] r=-1 lpr=78 pi=[55,78)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.810256958s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:06 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 78 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78 pruub=14.221438408s) [2] r=-1 lpr=78 pi=[55,78)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 213.811279297s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:06 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 78 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78 pruub=14.221388817s) [2] r=-1 lpr=78 pi=[55,78)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.811279297s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:06 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  9 11:02:06 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  9 11:02:07 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:07 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:07 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:07.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:07 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:07 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:07 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 11:02:07 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:07 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:07 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:07 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:07 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  9 11:02:07 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  9 11:02:07 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  9 11:02:08 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 4 active+remapped, 349 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 8 op/s; 36 B/s, 3 objects/s recovering
Oct  9 11:02:08 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct  9 11:02:08 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 11:02:08 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:08 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:08 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:08 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:08 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:08 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:08 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:08 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct  9 11:02:09 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct  9 11:02:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:09 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:09 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:09 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 11:02:09 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.581665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729581743, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 251, "total_data_size": 3330059, "memory_usage": 3630584, "flush_reason": "Manual Compaction"}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  9 11:02:09 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  9 11:02:09 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 11:02:09 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:09 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:09 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 11:02:09 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729595563, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3180379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7143, "largest_seqno": 8258, "table_properties": {"data_size": 3174489, "index_size": 2961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16143, "raw_average_key_size": 21, "raw_value_size": 3161283, "raw_average_value_size": 4301, "num_data_blocks": 130, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007703, "oldest_key_time": 1760007703, "file_creation_time": 1760007729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "PFLMSQ4A6H5TNSVWO03K", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13934 microseconds, and 6870 cpu microseconds.
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.595607) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3180379 bytes OK
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.595626) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.597985) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.598006) EVENT_LOG_v1 {"time_micros": 1760007729597999, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.598024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3323964, prev total WAL file size 3324294, number of live WAL files 2.
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.598877) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3105KB)], [20(11MB)]
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729599001, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 15255141, "oldest_snapshot_seqno": -1}
Oct  9 11:02:09 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:09 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a20001e60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3312 keys, 13913218 bytes, temperature: kUnknown
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729661581, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13913218, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13886730, "index_size": 17098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 85607, "raw_average_key_size": 25, "raw_value_size": 13821373, "raw_average_value_size": 4173, "num_data_blocks": 745, "num_entries": 3312, "num_filter_entries": 3312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007436, "oldest_key_time": 0, "file_creation_time": 1760007729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "16685083-1a78-43b3-bfd2-221d12c7d9cc", "db_session_id": "PFLMSQ4A6H5TNSVWO03K", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.661816) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13913218 bytes
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.663762) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.4 rd, 222.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(9.2) write-amplify(4.4) OK, records in: 3844, records dropped: 532 output_compression: NoCompression
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.663787) EVENT_LOG_v1 {"time_micros": 1760007729663775, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62668, "compaction_time_cpu_micros": 24854, "output_level": 6, "num_output_files": 1, "total_output_size": 13913218, "num_input_records": 3844, "num_output_records": 3312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729664568, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729666690, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.598774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.666803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.666809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.666811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.666812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:09 compute-0 ceph-mon[4705]: rocksdb: (Original Log Time 2025/10/09-11:02:09.666813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 11:02:10 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct  9 11:02:10 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct  9 11:02:10 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Oct  9 11:02:10 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  9 11:02:10 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:10] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  9 11:02:10 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.access.140070047889680] ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:10] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Oct  9 11:02:10 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:10 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:10 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:10 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:10 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:10 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:10 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:11 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct  9 11:02:11 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct  9 11:02:11 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:11 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:11 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:11.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:11 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:11 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:11 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:11 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 11:02:12 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  9 11:02:12 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 11:02:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  9 11:02:12 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:12 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80 pruub=9.067049026s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 213.810150146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80 pruub=9.067018509s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.810150146s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80 pruub=9.067186356s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 213.811309814s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:12 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 80 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80 pruub=9.067147255s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.811309814s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:12 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:12 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:12 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a20001e60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:12 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:12 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc0036e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:12 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:12 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:12 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 11:02:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 11:02:12 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:12 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:13 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  9 11:02:13 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  9 11:02:13 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  9 11:02:13 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:13 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:13 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:13.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:13 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 11:02:13 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.479565993 +0000 UTC m=+0.037914153 container create c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 11:02:13 compute-0 systemd[1]: Started libpod-conmon-c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70.scope.
Oct  9 11:02:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.462672248 +0000 UTC m=+0.021020418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.560359337 +0000 UTC m=+0.118707597 container init c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.572215009 +0000 UTC m=+0.130563169 container start c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.575609459 +0000 UTC m=+0.133957619 container attach c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:02:13 compute-0 wonderful_kirch[32608]: 167 167
Oct  9 11:02:13 compute-0 systemd[1]: libpod-c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70.scope: Deactivated successfully.
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.57781588 +0000 UTC m=+0.136164040 container died c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 11:02:13 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:13 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 11:02:13 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:13 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 11:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a663d598a3e0fdbf964fc9297f27944dd02a77e8f3b381a0a419f705a4fc36d6-merged.mount: Deactivated successfully.
Oct  9 11:02:13 compute-0 podman[32591]: 2025-10-09 11:02:13.652263199 +0000 UTC m=+0.210611369 container remove c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_kirch, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Oct  9 11:02:13 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:13 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:13 compute-0 systemd[1]: libpod-conmon-c514a8bcc792b46477a6d0b1b6cd573975962dcfc278ec00b19a95448fac0e70.scope: Deactivated successfully.
Oct  9 11:02:13 compute-0 podman[32632]: 2025-10-09 11:02:13.793913865 +0000 UTC m=+0.035096572 container create 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 11:02:13 compute-0 systemd[1]: Started libpod-conmon-8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0.scope.
Oct  9 11:02:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:13 compute-0 podman[32632]: 2025-10-09 11:02:13.872360584 +0000 UTC m=+0.113543291 container init 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 11:02:13 compute-0 podman[32632]: 2025-10-09 11:02:13.778439336 +0000 UTC m=+0.019622073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:13 compute-0 podman[32632]: 2025-10-09 11:02:13.879119671 +0000 UTC m=+0.120302378 container start 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:02:13 compute-0 podman[32632]: 2025-10-09 11:02:13.88248459 +0000 UTC m=+0.123667297 container attach 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 11:02:14 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  9 11:02:14 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  9 11:02:14 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  9 11:02:14 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  9 11:02:14 compute-0 elegant_dewdney[32649]: --> passed data devices: 0 physical, 1 LVM
Oct  9 11:02:14 compute-0 elegant_dewdney[32649]: --> All data devices are unavailable
Oct  9 11:02:14 compute-0 systemd[1]: libpod-8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0.scope: Deactivated successfully.
Oct  9 11:02:14 compute-0 podman[32632]: 2025-10-09 11:02:14.21691641 +0000 UTC m=+0.458099117 container died 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 11:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28aa09cf0f0b537077be5a444454983fb4dfd72f4c14fd2349dba8709bc0f59-merged.mount: Deactivated successfully.
Oct  9 11:02:14 compute-0 podman[32632]: 2025-10-09 11:02:14.257032722 +0000 UTC m=+0.498215429 container remove 8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:02:14 compute-0 systemd[1]: libpod-conmon-8fcb4e5f0d62aee4a0ce7f1fe910c8917485adeee1b0305c1a2ff9cff27043a0.scope: Deactivated successfully.
Oct  9 11:02:14 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:14 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 81 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:14 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 81 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[55,80)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.76112053 +0000 UTC m=+0.034831374 container create 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 11:02:14 compute-0 systemd[1]: Started libpod-conmon-627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773.scope.
Oct  9 11:02:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.816280418 +0000 UTC m=+0.089991282 container init 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 11:02:14 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:14 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.823225791 +0000 UTC m=+0.096936625 container start 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:14 compute-0 nice_ishizaka[32783]: 167 167
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.827063766 +0000 UTC m=+0.100774630 container attach 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:02:14 compute-0 systemd[1]: libpod-627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773.scope: Deactivated successfully.
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.827545151 +0000 UTC m=+0.101255995 container died 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.745684712 +0000 UTC m=+0.019395576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-aee334e77b2188fc1b89467c3f4a3ea41578236b06f4b4b4fddd8af7ffa9f823-merged.mount: Deactivated successfully.
Oct  9 11:02:14 compute-0 podman[32766]: 2025-10-09 11:02:14.858588631 +0000 UTC m=+0.132299475 container remove 627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 11:02:14 compute-0 systemd[1]: libpod-conmon-627638cb95cac0e4616b207f15f74defb7bff9642e561528517d187f9c7a3773.scope: Deactivated successfully.
Oct  9 11:02:14 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:14 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200039b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:14 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:14 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:14 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.00810875 +0000 UTC m=+0.036638532 container create 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 11:02:15 compute-0 systemd[1]: Started libpod-conmon-5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7.scope.
Oct  9 11:02:15 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8125011ebdc6aa600a1bf9bb1ac95baf818b45cfb1ae20dba011f6a15f843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8125011ebdc6aa600a1bf9bb1ac95baf818b45cfb1ae20dba011f6a15f843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8125011ebdc6aa600a1bf9bb1ac95baf818b45cfb1ae20dba011f6a15f843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8125011ebdc6aa600a1bf9bb1ac95baf818b45cfb1ae20dba011f6a15f843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:14.991880988 +0000 UTC m=+0.020410800 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.095115585 +0000 UTC m=+0.123645367 container init 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.106635146 +0000 UTC m=+0.135164948 container start 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.109940593 +0000 UTC m=+0.138470375 container attach 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 11:02:15 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:15 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:15 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]: {
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:    "0": [
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:        {
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "devices": [
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "/dev/loop3"
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            ],
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "lv_name": "ceph_lv0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "lv_size": "21470642176",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0ea02d81-16d9-4b32-9888-cc7ebc83243e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "lv_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "name": "ceph_lv0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "tags": {
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.block_uuid": "FE1gnZ-I1My-j70A-zUNv-ZgvE-9KmG-TUifre",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.cephx_lockbox_secret": "",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.cluster_name": "ceph",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.crush_device_class": "",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.encrypted": "0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.osd_fsid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.osd_id": "0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.type": "block",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.vdo": "0",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:                "ceph.with_tpm": "0"
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            },
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "type": "block",
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:            "vg_name": "ceph_vg0"
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:        }
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]:    ]
Oct  9 11:02:15 compute-0 ecstatic_goodall[32824]: }
Oct  9 11:02:15 compute-0 systemd[1]: libpod-5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7.scope: Deactivated successfully.
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.397299034 +0000 UTC m=+0.425828826 container died 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 11:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bce8125011ebdc6aa600a1bf9bb1ac95baf818b45cfb1ae20dba011f6a15f843-merged.mount: Deactivated successfully.
Oct  9 11:02:15 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  9 11:02:15 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  9 11:02:15 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 82 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:15 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 82 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:15 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 82 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:15 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 82 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:15 compute-0 podman[32808]: 2025-10-09 11:02:15.446329734 +0000 UTC m=+0.474859516 container remove 5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 11:02:15 compute-0 systemd[1]: libpod-conmon-5ca518f8321564552eb76310f0ea52a1cbebd7576b0228f9ffb62623499fb1a7.scope: Deactivated successfully.
Oct  9 11:02:15 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:15 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003700 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.000213837 +0000 UTC m=+0.037257102 container create fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:02:16 compute-0 systemd[1]: Started libpod-conmon-fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e.scope.
Oct  9 11:02:16 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:15.983032013 +0000 UTC m=+0.020075268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.083722239 +0000 UTC m=+0.120765504 container init fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.090817887 +0000 UTC m=+0.127861132 container start fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:16 compute-0 adoring_ellis[32952]: 167 167
Oct  9 11:02:16 compute-0 systemd[1]: libpod-fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e.scope: Deactivated successfully.
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.097612526 +0000 UTC m=+0.134655771 container attach fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.097923806 +0000 UTC m=+0.134967041 container died fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 11:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-da4f77cb98e02a47ff1a7b170cba6b4c12999e1cc6c39034e5b410e118e69460-merged.mount: Deactivated successfully.
Oct  9 11:02:16 compute-0 podman[32935]: 2025-10-09 11:02:16.129159073 +0000 UTC m=+0.166202318 container remove fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Oct  9 11:02:16 compute-0 systemd[1]: libpod-conmon-fddff42b9b515b4a4643530ef50c0b5cfdb006d0d3d0f287bd2c2bf828ea165e.scope: Deactivated successfully.
Oct  9 11:02:16 compute-0 podman[32978]: 2025-10-09 11:02:16.269045632 +0000 UTC m=+0.038809883 container create f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Oct  9 11:02:16 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 2 activating+remapped, 2 unknown, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s; 11/204 objects misplaced (5.392%)
Oct  9 11:02:16 compute-0 systemd[1]: Started libpod-conmon-f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1.scope.
Oct  9 11:02:16 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34eb5599d55cf861bb0062adc2fb61400fbede0e915c96e9422eeaa7e53e0d46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34eb5599d55cf861bb0062adc2fb61400fbede0e915c96e9422eeaa7e53e0d46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34eb5599d55cf861bb0062adc2fb61400fbede0e915c96e9422eeaa7e53e0d46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34eb5599d55cf861bb0062adc2fb61400fbede0e915c96e9422eeaa7e53e0d46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:16 compute-0 podman[32978]: 2025-10-09 11:02:16.335551845 +0000 UTC m=+0.105316116 container init f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:16 compute-0 podman[32978]: 2025-10-09 11:02:16.343421509 +0000 UTC m=+0.113185760 container start f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:02:16 compute-0 podman[32978]: 2025-10-09 11:02:16.346180778 +0000 UTC m=+0.115945049 container attach f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:02:16 compute-0 podman[32978]: 2025-10-09 11:02:16.25222054 +0000 UTC m=+0.021984811 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:16 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  9 11:02:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:16 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:16 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:16 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:16 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:16 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:16 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:16 compute-0 lvm[33069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 11:02:16 compute-0 lvm[33069]: VG ceph_vg0 finished
Oct  9 11:02:17 compute-0 happy_elgamal[32994]: {}
Oct  9 11:02:17 compute-0 systemd[1]: libpod-f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1.scope: Deactivated successfully.
Oct  9 11:02:17 compute-0 systemd[1]: libpod-f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1.scope: Consumed 1.060s CPU time.
Oct  9 11:02:17 compute-0 podman[32978]: 2025-10-09 11:02:17.027849989 +0000 UTC m=+0.797614260 container died f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 83 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=5 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=83 pruub=13.632905960s) [2] async=[2] r=-1 lpr=83 pi=[55,83)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 223.336669922s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 83 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=5 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=83 pruub=13.632709503s) [2] r=-1 lpr=83 pi=[55,83)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.336669922s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-34eb5599d55cf861bb0062adc2fb61400fbede0e915c96e9422eeaa7e53e0d46-merged.mount: Deactivated successfully.
Oct  9 11:02:17 compute-0 podman[32978]: 2025-10-09 11:02:17.074513683 +0000 UTC m=+0.844277934 container remove f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:17 compute-0 systemd[1]: libpod-conmon-f0baac4bfc8207e4546c44b6af97f6bc658b48a1afc91fbe276248ca4bb18bd1.scope: Deactivated successfully.
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:17 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:17 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:17 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 83 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 83 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[55,82)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 11:02:17 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:17 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 11:02:17 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  9 11:02:17 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  9 11:02:17 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 84 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=6 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=84 pruub=13.126420021s) [2] async=[2] r=-1 lpr=84 pi=[55,84)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 223.336639404s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:17 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 84 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=80/81 n=6 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=84 pruub=13.126232147s) [2] r=-1 lpr=84 pi=[55,84)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.336639404s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:17 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:17 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200039b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:17 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:17 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.902999925 +0000 UTC m=+0.037622554 container create 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:02:17 compute-0 systemd[1]: Started libpod-conmon-519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a.scope.
Oct  9 11:02:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.973317142 +0000 UTC m=+0.107939791 container init 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.980020667 +0000 UTC m=+0.114643296 container start 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.982577611 +0000 UTC m=+0.117200240 container attach 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.888633622 +0000 UTC m=+0.023256271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:02:17 compute-0 peaceful_murdock[33218]: 167 167
Oct  9 11:02:17 compute-0 systemd[1]: libpod-519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a.scope: Deactivated successfully.
Oct  9 11:02:17 compute-0 podman[33201]: 2025-10-09 11:02:17.985834365 +0000 UTC m=+0.120456994 container died 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 11:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f32570fa6cc2b624a6d170e4b715489bd4c67f3f2cb79ce28796bad4d1357a9-merged.mount: Deactivated successfully.
Oct  9 11:02:18 compute-0 podman[33201]: 2025-10-09 11:02:18.023229631 +0000 UTC m=+0.157852260 container remove 519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a (image=quay.io/ceph/ceph:v19, name=peaceful_murdock, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  9 11:02:18 compute-0 systemd[1]: libpod-conmon-519713ac0e3fdbcc06458670b4100f0dac07e90f2f2e959612e819397d039c9a.scope: Deactivated successfully.
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:18 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.izrudc (monmap changed)...
Oct  9 11:02:18 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.izrudc (monmap changed)...
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:18 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 11:02:18 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 11:02:18 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 2 activating+remapped, 2 unknown, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 11 op/s; 11/204 objects misplaced (5.392%)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.642284444 +0000 UTC m=+0.039638949 container create af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 11:02:18 compute-0 systemd[1]: Started libpod-conmon-af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a.scope.
Oct  9 11:02:18 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  9 11:02:18 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 85 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85 pruub=14.558281898s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 225.940811157s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:18 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 85 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85 pruub=14.558126450s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 225.940811157s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:18 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 85 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85 pruub=14.557896614s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 225.940811157s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:18 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 85 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=82/83 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85 pruub=14.557846069s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 225.940811157s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:18 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.626849195 +0000 UTC m=+0.024203720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.726624792 +0000 UTC m=+0.123979377 container init af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.732883704 +0000 UTC m=+0.130238209 container start af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.735934271 +0000 UTC m=+0.133288796 container attach af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:02:18 compute-0 modest_kilby[33319]: 167 167
Oct  9 11:02:18 compute-0 systemd[1]: libpod-af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a.scope: Deactivated successfully.
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.737338827 +0000 UTC m=+0.134693342 container died af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Oct  9 11:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-adcb19ffb95ab9f529ffcabc570ffbc7397973626fea17b17df1074508c935b7-merged.mount: Deactivated successfully.
Oct  9 11:02:18 compute-0 podman[33302]: 2025-10-09 11:02:18.7746619 +0000 UTC m=+0.172016405 container remove af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a (image=quay.io/ceph/ceph:v19, name=modest_kilby, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 11:02:18 compute-0 systemd[1]: libpod-conmon-af7e285b378b74a6f3fb27b8ed9c52789e1d8f85800dfdc157c259284692ec8a.scope: Deactivated successfully.
Oct  9 11:02:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:18 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003720 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:18 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:18 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:18 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a080019c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:18 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:18 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:18 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:19 compute-0 ceph-mon[4705]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 11:02:19 compute-0 ceph-mon[4705]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 11:02:19 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:19 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:19 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 11:02:19 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:19 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:19 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:19 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:19 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:19 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:19 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 11:02:19 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 11:02:19 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 11:02:19 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 11:02:19 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:19 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:19 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 11:02:19 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 11:02:19 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:19 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.002209645 +0000 UTC m=+0.045112365 container create a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Oct  9 11:02:20 compute-0 systemd[1]: Started libpod-conmon-a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f.scope.
Oct  9 11:02:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:19.980803836 +0000 UTC m=+0.023706576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.079714793 +0000 UTC m=+0.122617513 container init a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.08765944 +0000 UTC m=+0.130562140 container start a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.090743389 +0000 UTC m=+0.133646089 container attach a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:02:20 compute-0 wonderful_gould[33418]: 167 167
Oct  9 11:02:20 compute-0 systemd[1]: libpod-a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f.scope: Deactivated successfully.
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.091994459 +0000 UTC m=+0.134897169 container died a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 11:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d483102c0f32c3883df951ab32e1d5a4c2f6d2df57dc2f10ad9d34a085c8168-merged.mount: Deactivated successfully.
Oct  9 11:02:20 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  9 11:02:20 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  9 11:02:20 compute-0 podman[33401]: 2025-10-09 11:02:20.128698532 +0000 UTC m=+0.171601222 container remove a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: Reconfiguring mgr.compute-0.izrudc (monmap changed)...
Oct  9 11:02:20 compute-0 ceph-mon[4705]: Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct  9 11:02:20 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:20 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:20 compute-0 ceph-mon[4705]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 11:02:20 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 11:02:20 compute-0 ceph-mon[4705]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 11:02:20 compute-0 systemd[1]: libpod-conmon-a5831e944b9f527fa33a81f73e4982aef924dab8227298c3c9afecccfd4eb50f.scope: Deactivated successfully.
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  9 11:02:20 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  9 11:02:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct  9 11:02:20 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:20 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Oct  9 11:02:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:20] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Oct  9 11:02:20 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.access.140070047889680] ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:20] "GET /metrics HTTP/1.1" 200 48289 "" "Prometheus/2.51.0"
Oct  9 11:02:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:20 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200039b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:20 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:20 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:20 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:20 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:20 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:21 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  9 11:02:21 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.112099058 +0000 UTC m=+0.035848917 container create 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:02:21 compute-0 systemd[1]: Started libpod-conmon-010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b.scope.
Oct  9 11:02:21 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.161689556 +0000 UTC m=+0.085439415 container init 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.167279806 +0000 UTC m=+0.091029675 container start 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:02:21 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:21 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000032s ======
Oct  9 11:02:21 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.170265593 +0000 UTC m=+0.094015452 container attach 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 11:02:21 compute-0 distracted_raman[33520]: 167 167
Oct  9 11:02:21 compute-0 systemd[1]: libpod-010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b.scope: Deactivated successfully.
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.171897335 +0000 UTC m=+0.095647194 container died 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 11:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-24b36b1009c5a5306918ba1d3685e49ff6bc631632924273c1ebeaa4a2654ba3-merged.mount: Deactivated successfully.
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.097792967 +0000 UTC m=+0.021542846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 11:02:21 compute-0 podman[33504]: 2025-10-09 11:02:21.201604823 +0000 UTC m=+0.125354682 container remove 010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_raman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 11:02:21 compute-0 systemd[1]: libpod-conmon-010c55270ff0c669bc57639eda02d5ee5b873dd21a6116b9dbd88dbf15ac4b8b.scope: Deactivated successfully.
Oct  9 11:02:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:21 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:21 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:21 compute-0 ceph-mon[4705]: Reconfiguring osd.0 (monmap changed)...
Oct  9 11:02:21 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 11:02:21 compute-0 ceph-mon[4705]: Reconfiguring daemon osd.0 on compute-0
Oct  9 11:02:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:21 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:21 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:21 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:21 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:21 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 11:02:21 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 11:02:21 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 11:02:21 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 11:02:22 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Oct  9 11:02:22 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Oct  9 11:02:22 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:22 compute-0 podman[33642]: 2025-10-09 11:02:22.299009953 +0000 UTC m=+0.056036027 container died 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:22 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Oct  9 11:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bee001972612ab5444561052d9682f385a6d13cdb68ae28a2ecae6049500723-merged.mount: Deactivated successfully.
Oct  9 11:02:22 compute-0 podman[33642]: 2025-10-09 11:02:22.34077903 +0000 UTC m=+0.097805094 container remove 29ed4c27a091227a92647edbe2a039a94f7db6f922d84bc83e788d382be51585 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:22 compute-0 bash[33642]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0
Oct  9 11:02:22 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Oct  9 11:02:22 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:22 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:22 compute-0 ceph-mon[4705]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 11:02:22 compute-0 ceph-mon[4705]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 11:02:22 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@node-exporter.compute-0.service: Failed with result 'exit-code'.
Oct  9 11:02:22 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:22 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@node-exporter.compute-0.service: Consumed 1.980s CPU time.
Oct  9 11:02:22 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:22 compute-0 podman[33751]: 2025-10-09 11:02:22.644267731 +0000 UTC m=+0.034566475 container create cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/695f3296f1367e19d3722a07f75379902a52755bae558209773e84e45b7a8e85/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:22 compute-0 podman[33751]: 2025-10-09 11:02:22.692576798 +0000 UTC m=+0.082875562 container init cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:22 compute-0 podman[33751]: 2025-10-09 11:02:22.697980372 +0000 UTC m=+0.088279116 container start cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:22 compute-0 bash[33751]: cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28
Oct  9 11:02:22 compute-0 podman[33751]: 2025-10-09 11:02:22.629056501 +0000 UTC m=+0.019355265 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.703Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.703Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.704Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.704Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.704Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.704Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=os
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=time
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.705Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.706Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0[33766]: ts=2025-10-09T11:02:22.706Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  9 11:02:22 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:22 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:22 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:22 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a200046c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:22 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:22 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:22 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:22 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:22 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:23 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:23 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 11:02:23 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 11:02:23 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 11:02:23 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 11:02:23 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct  9 11:02:23 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct  9 11:02:23 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:23 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:23 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.453770392 +0000 UTC m=+0.032255380 volume create 95e879df7bd450d1fa8ba81375879b212461e15a9daa9359143d8ca233c7b66e
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.463734374 +0000 UTC m=+0.042219362 container create 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 systemd[1]: Started libpod-conmon-3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575.scope.
Oct  9 11:02:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c611151c18bb36fd46f5d534d0795103310f44aacb985a8e9cb429055ee33c8/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.441967511 +0000 UTC m=+0.020452519 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.542501983 +0000 UTC m=+0.120986991 container init 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.549999164 +0000 UTC m=+0.128484152 container start 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 bold_faraday[33859]: 65534 65534
Oct  9 11:02:23 compute-0 systemd[1]: libpod-3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575.scope: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.552557516 +0000 UTC m=+0.131042524 container attach 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 conmon[33859]: conmon 3c6497c339aadfb5e2d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575.scope/container/memory.events
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.554261061 +0000 UTC m=+0.132746059 container died 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:23 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c611151c18bb36fd46f5d534d0795103310f44aacb985a8e9cb429055ee33c8-merged.mount: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.595143229 +0000 UTC m=+0.173628217 container remove 3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575 (image=quay.io/prometheus/alertmanager:v0.25.0, name=bold_faraday, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33842]: 2025-10-09 11:02:23.598798306 +0000 UTC m=+0.177283314 volume remove 95e879df7bd450d1fa8ba81375879b212461e15a9daa9359143d8ca233c7b66e
Oct  9 11:02:23 compute-0 systemd[1]: libpod-conmon-3c6497c339aadfb5e2d047635183faff184b55611fbf8f68b3e346b76ff78575.scope: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.652555389 +0000 UTC m=+0.034396209 volume create 3cb9b162ed25173032444248008031a4192fc2ce4829625194573105d7d94fa3
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.660920669 +0000 UTC m=+0.042761489 container create 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:23 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:23 compute-0 systemd[1]: Started libpod-conmon-41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f.scope.
Oct  9 11:02:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abb0e17633e6218a6883d0107181b8c5841599ac35e3447c911bbdbd92030a0/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.712359287 +0000 UTC m=+0.094200137 container init 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.717577155 +0000 UTC m=+0.099417975 container start 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 reverent_mendel[33890]: 65534 65534
Oct  9 11:02:23 compute-0 systemd[1]: libpod-41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f.scope: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.720660894 +0000 UTC m=+0.102501714 container attach 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.721096899 +0000 UTC m=+0.102937719 container died 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.641260725 +0000 UTC m=+0.023101565 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8abb0e17633e6218a6883d0107181b8c5841599ac35e3447c911bbdbd92030a0-merged.mount: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.748907475 +0000 UTC m=+0.130748285 container remove 41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f (image=quay.io/prometheus/alertmanager:v0.25.0, name=reverent_mendel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33874]: 2025-10-09 11:02:23.753708339 +0000 UTC m=+0.135549159 volume remove 3cb9b162ed25173032444248008031a4192fc2ce4829625194573105d7d94fa3
Oct  9 11:02:23 compute-0 systemd[1]: libpod-conmon-41e6c96cc3c89771081af5bf09aa0760ad31127e83a6220df948c24e3feb628f.scope: Deactivated successfully.
Oct  9 11:02:23 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:23 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[28448]: ts=2025-10-09T11:02:23.933Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct  9 11:02:23 compute-0 podman[33938]: 2025-10-09 11:02:23.943409623 +0000 UTC m=+0.040960620 container died e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c23582dab11349f643784c785110f40c91f53ca635a975b97db00fd6edeab18b-merged.mount: Deactivated successfully.
Oct  9 11:02:23 compute-0 podman[33938]: 2025-10-09 11:02:23.972865333 +0000 UTC m=+0.070416330 container remove e5e822fd2f2bd6b5251689b63c2ccf4d78db12443536c16b56b8bef1177cfd7e (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:23 compute-0 podman[33938]: 2025-10-09 11:02:23.977951177 +0000 UTC m=+0.075502184 volume remove a0b6c50a2a31474484748a6ee8545de38e0ba2c0b5d2f916a441fbf3b3979805
Oct  9 11:02:23 compute-0 bash[33938]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0
Oct  9 11:02:24 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct  9 11:02:24 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct  9 11:02:24 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@alertmanager.compute-0.service: Deactivated successfully.
Oct  9 11:02:24 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:24 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:24 compute-0 podman[34041]: 2025-10-09 11:02:24.292897509 +0000 UTC m=+0.036150477 volume create a5222731dfc8e620e6387361cb6450f9f376014d416d9952a45787bd33065644
Oct  9 11:02:24 compute-0 podman[34041]: 2025-10-09 11:02:24.303950665 +0000 UTC m=+0.047203633 container create a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:24 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Oct  9 11:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bfe679bb72611939900ac58050d38fcb76b8529aaffe4f87af7592535a5e6a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bfe679bb72611939900ac58050d38fcb76b8529aaffe4f87af7592535a5e6a/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:24 compute-0 podman[34041]: 2025-10-09 11:02:24.375886353 +0000 UTC m=+0.119139331 container init a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:24 compute-0 podman[34041]: 2025-10-09 11:02:24.281823201 +0000 UTC m=+0.025076199 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 11:02:24 compute-0 podman[34041]: 2025-10-09 11:02:24.38509848 +0000 UTC m=+0.128351448 container start a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:24 compute-0 bash[34041]: a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62
Oct  9 11:02:24 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.427Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.427Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.437Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=172.19.0.101 port=9094
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.439Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  9 11:02:24 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.475Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.476Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.479Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:24.479Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  9 11:02:24 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:24 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:24 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:24 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 11:02:24 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 11:02:24 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 11:02:24 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 11:02:24 compute-0 ceph-mon[4705]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 11:02:24 compute-0 ceph-mon[4705]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 11:02:24 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:24 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29fc003740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:24 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:24 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:24 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:24 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:24 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:25 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct  9 11:02:25 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct  9 11:02:25 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:25 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:25 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.282081521 +0000 UTC m=+0.055047825 container create c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: Started libpod-conmon-c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059.scope.
Oct  9 11:02:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.260264958 +0000 UTC m=+0.033231282 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.361760989 +0000 UTC m=+0.134727323 container init c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.374299033 +0000 UTC m=+0.147265327 container start c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.377231867 +0000 UTC m=+0.150198191 container attach c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 gifted_heyrovsky[34164]: 472 0
Oct  9 11:02:25 compute-0 systemd[1]: libpod-c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059.scope: Deactivated successfully.
Oct  9 11:02:25 compute-0 conmon[34164]: conmon c04fbcaeb4fcf1b3cf46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059.scope/container/memory.events
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.3804128 +0000 UTC m=+0.153379094 container died c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-db3e98117a9e072a9e786bd45b0ce742b45b2f65a26087216b5936f76a402efc-merged.mount: Deactivated successfully.
Oct  9 11:02:25 compute-0 podman[34148]: 2025-10-09 11:02:25.426548938 +0000 UTC m=+0.199515232 container remove c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059 (image=quay.io/ceph/grafana:10.4.0, name=gifted_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: libpod-conmon-c04fbcaeb4fcf1b3cf461beb9bf4b01a22d7a32e387feaac513ce0e5cab7f059.scope: Deactivated successfully.
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.501520364 +0000 UTC m=+0.053488415 container create 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: Started libpod-conmon-2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356.scope.
Oct  9 11:02:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.474235985 +0000 UTC m=+0.026204096 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.568296846 +0000 UTC m=+0.120264917 container init 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.574451834 +0000 UTC m=+0.126419925 container start 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 sleepy_davinci[34199]: 472 0
Oct  9 11:02:25 compute-0 systemd[1]: libpod-2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356.scope: Deactivated successfully.
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.578186474 +0000 UTC m=+0.130154525 container attach 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 conmon[34199]: conmon 2300df6ae129a2948c7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356.scope/container/memory.events
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.57899596 +0000 UTC m=+0.130964112 container died 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb38bc71650f03174b06379d2ae40260dd33d60e54405f51602a4ac3a5fdb70b-merged.mount: Deactivated successfully.
Oct  9 11:02:25 compute-0 podman[34182]: 2025-10-09 11:02:25.619972801 +0000 UTC m=+0.171940862 container remove 2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356 (image=quay.io/ceph/grafana:10.4.0, name=sleepy_davinci, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: libpod-conmon-2300df6ae129a2948c7c618a96087c9c89c84d6f301fbab1fad6f7e89c139356.scope: Deactivated successfully.
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:25 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:25 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:25 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:25 compute-0 ceph-mon[4705]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 11:02:25 compute-0 ceph-mon[4705]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=server t=2025-10-09T11:02:25.87808264Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=tracing t=2025-10-09T11:02:25.878210194Z level=info msg="Closing tracing"
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=ticker t=2025-10-09T11:02:25.878275456Z level=info msg=stopped last_tick=2025-10-09T11:02:20Z
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=grafana-apiserver t=2025-10-09T11:02:25.878527405Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct  9 11:02:25 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[28976]: logger=sqlstore.transactions t=2025-10-09T11:02:25.890037306Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 11:02:25 compute-0 podman[34249]: 2025-10-09 11:02:25.909755781 +0000 UTC m=+0.070530954 container died 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c79a100980b829e7f67248f9ce59f3474c36f20933ce21bfd785c4d09adec0a6-merged.mount: Deactivated successfully.
Oct  9 11:02:26 compute-0 podman[34249]: 2025-10-09 11:02:26.005231159 +0000 UTC m=+0.166006332 container remove 3267687017e59b7c716a24572fef9c9ab3b7c334fbeda38960a488d4fe864ef2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:26 compute-0 bash[34249]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0
Oct  9 11:02:26 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct  9 11:02:26 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct  9 11:02:26 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@grafana.compute-0.service: Deactivated successfully.
Oct  9 11:02:26 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:26 compute-0 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@grafana.compute-0.service: Consumed 3.647s CPU time.
Oct  9 11:02:26 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct  9 11:02:26 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 0 objects/s recovering
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 11:02:26 compute-0 podman[34354]: 2025-10-09 11:02:26.327189386 +0000 UTC m=+0.044232887 container create 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd81cd64ad0bb2c400aece8ea45374e3d8c2a7543814c0e9d4a6ae76a9b5f852/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd81cd64ad0bb2c400aece8ea45374e3d8c2a7543814c0e9d4a6ae76a9b5f852/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd81cd64ad0bb2c400aece8ea45374e3d8c2a7543814c0e9d4a6ae76a9b5f852/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd81cd64ad0bb2c400aece8ea45374e3d8c2a7543814c0e9d4a6ae76a9b5f852/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd81cd64ad0bb2c400aece8ea45374e3d8c2a7543814c0e9d4a6ae76a9b5f852/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  9 11:02:26 compute-0 podman[34354]: 2025-10-09 11:02:26.390293489 +0000 UTC m=+0.107337000 container init 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:26 compute-0 podman[34354]: 2025-10-09 11:02:26.396531171 +0000 UTC m=+0.113574672 container start 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:26 compute-0 podman[34354]: 2025-10-09 11:02:26.305561609 +0000 UTC m=+0.022605140 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 11:02:26 compute-0 bash[34354]: 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b
Oct  9 11:02:26 compute-0 systemd[1]: Started Ceph grafana.compute-0 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:26.439Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000112626s
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559565035Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-09T11:02:26Z
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559825404Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559840714Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559845045Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559848825Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559852445Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559855915Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559859285Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559866535Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559870475Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559873896Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559878106Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559882346Z level=info msg=Target target=[all]
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559896756Z level=info msg="Path Home" path=/usr/share/grafana
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559901436Z level=info msg="Path Data" path=/var/lib/grafana
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559905437Z level=info msg="Path Logs" path=/var/log/grafana
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559909157Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559912967Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=settings t=2025-10-09T11:02:26.559917557Z level=info msg="App mode production"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=sqlstore t=2025-10-09T11:02:26.56030939Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=sqlstore t=2025-10-09T11:02:26.560333011Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=migrator t=2025-10-09T11:02:26.560989481Z level=info msg="Starting DB migrations"
Oct  9 11:02:26 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 11:02:26 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:26 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 11:02:26 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=migrator t=2025-10-09T11:02:26.581682968Z level=info msg="migrations completed" performed=0 skipped=547 duration=521.747µs
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=sqlstore t=2025-10-09T11:02:26.582704342Z level=info msg="Created default organization"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=secrets t=2025-10-09T11:02:26.58329568Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=plugin.store t=2025-10-09T11:02:26.601471756Z level=info msg="Loading plugins..."
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=local.finder t=2025-10-09T11:02:26.679287934Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=plugin.store t=2025-10-09T11:02:26.679317535Z level=info msg="Plugins loaded" count=55 duration=77.847079ms
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=query_data t=2025-10-09T11:02:26.682233939Z level=info msg="Query Service initialization"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=live.push_http t=2025-10-09T11:02:26.685016849Z level=info msg="Live Push Gateway initialization"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.migration t=2025-10-09T11:02:26.703404431Z level=info msg=Starting
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.state.manager t=2025-10-09T11:02:26.722103224Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=infra.usagestats.collector t=2025-10-09T11:02:26.723962204Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=provisioning.datasources t=2025-10-09T11:02:26.725921947Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=provisioning.alerting t=2025-10-09T11:02:26.749640962Z level=info msg="starting to provision alerting"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=provisioning.alerting t=2025-10-09T11:02:26.749666852Z level=info msg="finished to provision alerting"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.state.manager t=2025-10-09T11:02:26.750649455Z level=info msg="Warming state cache for startup"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=grafanaStorageLogger t=2025-10-09T11:02:26.750774399Z level=info msg="Storage starting"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=provisioning.dashboard t=2025-10-09T11:02:26.750941514Z level=info msg="starting to provision dashboards"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.multiorg.alertmanager t=2025-10-09T11:02:26.751002595Z level=info msg="Starting MultiOrg Alertmanager"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=http.server t=2025-10-09T11:02:26.752574676Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=http.server t=2025-10-09T11:02:26.752903337Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.state.manager t=2025-10-09T11:02:26.753840017Z level=info msg="State cache has been initialized" states=0 duration=3.14333ms
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ngalert.scheduler t=2025-10-09T11:02:26.753876158Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=ticker t=2025-10-09T11:02:26.754123836Z level=info msg=starting first_tick=2025-10-09T11:02:30Z
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  9 11:02:26 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 11:02:26 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:26 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:26 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 11:02:26 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  9 11:02:26 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=provisioning.dashboard t=2025-10-09T11:02:26.826170319Z level=info msg="finished to provision dashboards"
Oct  9 11:02:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 87 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=87 pruub=10.322479248s) [1] r=-1 lpr=87 pi=[55,87)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 229.810974121s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 87 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=87 pruub=10.322733879s) [1] r=-1 lpr=87 pi=[55,87)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 229.811492920s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 87 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=87 pruub=10.322680473s) [1] r=-1 lpr=87 pi=[55,87)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.811492920s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:26 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 87 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=87 pruub=10.322360039s) [1] r=-1 lpr=87 pi=[55,87)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.810974121s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:26 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c001ac0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=grafana.update.checker t=2025-10-09T11:02:26.871069896Z level=info msg="Update check succeeded" duration=120.001479ms
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=plugins.update.checker t=2025-10-09T11:02:26.872200592Z level=info msg="Update check succeeded" duration=121.118534ms
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:26 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:26 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:26 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:26 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=grafana-apiserver t=2025-10-09T11:02:26.990691251Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  9 11:02:26 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0[34369]: logger=grafana-apiserver t=2025-10-09T11:02:26.991126835Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  9 11:02:27 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.12 deep-scrub starts
Oct  9 11:02:27 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.12 deep-scrub ok
Oct  9 11:02:27 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:27 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:27 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:27 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:27 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:27 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct  9 11:02:27 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  9 11:02:27 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:27 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:27 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Oct  9 11:02:27 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  9 11:02:27 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  9 11:02:27 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  9 11:02:27 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 88 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:27 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 88 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:27 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 88 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:27 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 88 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=55/56 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 11:02:27 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:27 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:27 compute-0 ceph-mon[4705]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 11:02:27 compute-0 ceph-mon[4705]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 11:02:27 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 11:02:27 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:27 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:27 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 11:02:28 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct  9 11:02:28 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  9 11:02:28 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 89 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=5 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:28 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 89 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=6 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[55,88)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:28 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  9 11:02:28 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  9 11:02:28 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:28 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:28 compute-0 ceph-mon[4705]: Reconfiguring osd.1 (monmap changed)...
Oct  9 11:02:28 compute-0 ceph-mon[4705]: Reconfiguring daemon osd.1 on compute-1
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:28 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:28 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:28 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:29 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.e scrub starts
Oct  9 11:02:29 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 12.e scrub ok
Oct  9 11:02:29 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:29 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:02:29 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:02:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  9 11:02:29 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:29 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:29 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  9 11:02:29 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  9 11:02:29 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 90 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=6 ec=55/36 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=15.004760742s) [1] async=[1] r=-1 lpr=90 pi=[55,90)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 237.370727539s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:29 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 90 pg[9.a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=6 ec=55/36 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=15.004649162s) [1] r=-1 lpr=90 pi=[55,90)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.370727539s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:29 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 90 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=5 ec=55/36 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=14.998586655s) [1] async=[1] r=-1 lpr=90 pi=[55,90)/1 crt=42'1020 lcod 0'0 mlcod 0'0 active pruub 237.364929199s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 11:02:29 compute-0 ceph-osd[12987]: osd.0 pg_epoch: 90 pg[9.1a( v 42'1020 (0'0,42'1020] local-lis/les=88/89 n=5 ec=55/36 lis/c=88/55 les/c/f=89/56/0 sis=90 pruub=14.998520851s) [1] r=-1 lpr=90 pi=[55,90)/1 crt=42'1020 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.364929199s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 11:02:30 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  9 11:02:30 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  9 11:02:30 compute-0 ceph-mon[4705]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 11:02:30 compute-0 ceph-mon[4705]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 11:02:30 compute-0 ceph-mon[4705]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct  9 11:02:30 compute-0 ceph-mon[4705]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct  9 11:02:30 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 0 objects/s recovering
Oct  9 11:02:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct  9 11:02:30 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 11:02:30 compute-0 systemd-logind[846]: New session 23 of user zuul.
Oct  9 11:02:30 compute-0 systemd[1]: Started Session 23 of User zuul.
Oct  9 11:02:30 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  9 11:02:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:30] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Oct  9 11:02:30 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.access.140070047889680] ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:30] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Oct  9 11:02:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:30 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:30 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:30 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:30 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:30 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:30 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:31 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Oct  9 11:02:31 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Oct  9 11:02:31 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:31 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:31 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:31.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:31 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 11:02:31 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  9 11:02:31 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  9 11:02:31 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 11:02:31 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 11:02:31 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:31 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002f80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:32 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Oct  9 11:02:32 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Oct  9 11:02:32 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 0 objects/s recovering
Oct  9 11:02:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct  9 11:02:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 11:02:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:32 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 11:02:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  9 11:02:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:32 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003f50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:32 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:32 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0023e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 11:02:32 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:32 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:32 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:32 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 11:02:32 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  9 11:02:32 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  9 11:02:33 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  9 11:02:33 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  9 11:02:33 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:33 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:33 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 11:02:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 11:02:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 11:02:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 11:02:33 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 11:02:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 11:02:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:33 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:33 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 11:02:33 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 11:02:33 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:33 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:33 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 11:02:33 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:33 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:33 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 11:02:33 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  9 11:02:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  9 11:02:34 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  9 11:02:34 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.agiurv (monmap changed)...
Oct  9 11:02:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.agiurv (monmap changed)...
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 11:02:34 compute-0 ceph-mgr[4997]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.agiurv on compute-2
Oct  9 11:02:34 compute-0 ceph-mgr[4997]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.agiurv on compute-2
Oct  9 11:02:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0[34057]: ts=2025-10-09T11:02:34.442Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.00280828s
Oct  9 11:02:34 compute-0 systemd[23076]: Starting Mark boot as successful...
Oct  9 11:02:34 compute-0 systemd[23076]: Finished Mark boot as successful.
Oct  9 11:02:34 compute-0 ceph-mon[4705]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 11:02:34 compute-0 ceph-mon[4705]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 11:02:34 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:34 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 11:02:34 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:34 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 11:02:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:34 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002f80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:34 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:34 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003f50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:34 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 11:02:34 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:34 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:34 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 11:02:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Oct  9 11:02:35 compute-0 ceph-osd[12987]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [DBG] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  9 11:02:35 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:35 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:35 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO root] Restarting engine...
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE Bus STOPPING
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE Bus STOPPING
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE Bus STOPPED
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE Bus STARTING
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE Bus STOPPED
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE Bus STARTING
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  9 11:02:35 compute-0 ovs-vsctl[34647]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 11:02:35 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE Serving on http://:::9283
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE Serving on http://:::9283
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: [09/Oct/2025:11:02:35] ENGINE Bus STARTED
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:02:35] ENGINE Bus STARTED
Oct  9 11:02:35 compute-0 ceph-mgr[4997]: [prometheus INFO root] Engine started.
Oct  9 11:02:35 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  9 11:02:35 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:35 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0023e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:35 compute-0 ceph-mon[4705]: Reconfiguring mgr.compute-2.agiurv (monmap changed)...
Oct  9 11:02:35 compute-0 ceph-mon[4705]: Reconfiguring daemon mgr.compute-2.agiurv on compute-2
Oct  9 11:02:35 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 11:02:35 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:35 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 11:02:35 compute-0 podman[34782]: 2025-10-09 11:02:35.918207143 +0000 UTC m=+0.055749497 container exec 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 11:02:36 compute-0 podman[34782]: 2025-10-09 11:02:36.011236262 +0000 UTC m=+0.148778616 container exec_died 704febf2c4e8b226e5db905d561e2b97bbca371df1bc491bc1d5f83f549e9b78 (image=quay.io/ceph/ceph:v19, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 11:02:36 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:36 compute-0 podman[35122]: 2025-10-09 11:02:36.537649439 +0000 UTC m=+0.053513626 container exec cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:36 compute-0 podman[35122]: 2025-10-09 11:02:36.548226649 +0000 UTC m=+0.064090806 container exec_died cedee0ab2c564c825cf01657e379f7481f4aaf5d5140d4eaa1d49f77f84cbc28 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:36 compute-0 lvm[35275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 11:02:36 compute-0 lvm[35275]: VG ceph_vg0 finished
Oct  9 11:02:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  9 11:02:36 compute-0 podman[35303]: 2025-10-09 11:02:36.840477629 +0000 UTC m=+0.063122846 container exec ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 11:02:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:36 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a0c003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:36 compute-0 podman[35303]: 2025-10-09 11:02:36.849799009 +0000 UTC m=+0.072444216 container exec_died ac8946a354724241794e82fd9152fd4df29b235e0b4cc57ac407c2c8538fae1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 11:02:36 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct  9 11:02:36 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:36 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002f80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:36 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  9 11:02:36 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  9 11:02:36 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:36 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:36 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:37 compute-0 podman[35428]: 2025-10-09 11:02:37.08094725 +0000 UTC m=+0.061239585 container exec 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:02:37 compute-0 podman[35428]: 2025-10-09 11:02:37.115340428 +0000 UTC m=+0.095632753 container exec_died 03d54a105c729a20fc67bea7058c7046089d7c7e98e45e40d470932571e9a49f (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-0-zhclxd)
Oct  9 11:02:37 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:37 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:37 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:37 compute-0 podman[35571]: 2025-10-09 11:02:37.319335693 +0000 UTC m=+0.057319768 container exec f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  9 11:02:37 compute-0 podman[35571]: 2025-10-09 11:02:37.359336422 +0000 UTC m=+0.097320497 container exec_died f6e4c8a175c46b855a160bc006fcb9eec3699404e427b7516700543116394f01 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-0-wkoquj, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Oct  9 11:02:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 11:02:37 compute-0 podman[35740]: 2025-10-09 11:02:37.57955281 +0000 UTC m=+0.051284484 container exec a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:37 compute-0 podman[35740]: 2025-10-09 11:02:37.613227576 +0000 UTC m=+0.084959240 container exec_died a85ab9d4e5e1b1ac9749425f0a707f88932113763b33285ae3f73ed0c83dff62 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:37 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:37 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003f50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:37 compute-0 podman[35892]: 2025-10-09 11:02:37.809170581 +0000 UTC m=+0.048623088 container exec 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  9 11:02:37 compute-0 podman[35892]: 2025-10-09 11:02:37.9866249 +0000 UTC m=+0.226077407 container exec_died 900c46e8ba9b8bc46096ee8f189ac90ba935b98a3b5ef4fa31f5b45ee2d65e9b (image=quay.io/ceph/grafana:10.4.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 11:02:37 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  9 11:02:38 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  9 11:02:38 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:38 compute-0 podman[36113]: 2025-10-09 11:02:38.363065514 +0000 UTC m=+0.057564327 container exec a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:38 compute-0 podman[36113]: 2025-10-09 11:02:38.400243451 +0000 UTC m=+0.094742244 container exec_died a0a563e2a358f6143511e62777dc410ff2bbc498986f2fd4e9b5445f7ec86d04 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 11:02:38 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 11:02:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:38 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0030f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:38 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:38 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0030f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:38 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:38 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:38 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:39 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:39 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:39 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:39 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:39 compute-0 ceph-mon[4705]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 11:02:39 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:39 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a08002f80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:40 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  9 11:02:40 compute-0 ceph-mgr[4997]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 1 activating, 1 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 11:02:40 compute-0 systemd[1]: Starting Hostname Service...
Oct  9 11:02:40 compute-0 systemd[1]: Started Hostname Service.
Oct  9 11:02:40 compute-0 ceph-mon[4705]: log_channel(audit) log [INF] : from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-0-izrudc[4993]: ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:40] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Oct  9 11:02:40 compute-0 ceph-mgr[4997]: [prometheus INFO cherrypy.access.140070047889680] ::ffff:192.168.122.100 - - [09/Oct/2025:11:02:40] "GET /metrics HTTP/1.1" 200 48281 "" "Prometheus/2.51.0"
Oct  9 11:02:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:40 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a00003f50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:40 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:40 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0030f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:40 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:40 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.001000033s ======
Oct  9 11:02:40 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:40.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Oct  9 11:02:41 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  9 11:02:41 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  9 11:02:41 compute-0 radosgw[19620]: ====== starting new request req=0x7efcd90705d0 =====
Oct  9 11:02:41 compute-0 radosgw[19620]: ====== req done req=0x7efcd90705d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 11:02:41 compute-0 radosgw[19620]: beast: 0x7efcd90705d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 11:02:41 compute-0 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-2-0-compute-0-akqbal[27216]: 09/10/2025 11:02:41 : epoch 68e795e9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2a2c0030f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 11:02:41 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:41 compute-0 ceph-mon[4705]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct  9 11:02:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  9 11:02:42 compute-0 ceph-mon[4705]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  9 11:02:42 compute-0 ceph-mon[4705]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
